00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 877
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3537
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.084 The recommended git tool is: git
00:00:00.084 using credential 00000000-0000-0000-0000-000000000002
00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.123 Fetching changes from the remote Git repository
00:00:00.125 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.177 Using shallow fetch with depth 1
00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.177 > git --version # timeout=10
00:00:00.220 > git --version # 'git version 2.39.2'
00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.734 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.746 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.758 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD)
00:00:04.758 > git config core.sparsecheckout # timeout=10
00:00:04.770 > git read-tree -mu HEAD # timeout=10
00:00:04.784 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5
00:00:04.801 Commit message: "scripts/kid: add issue 3551"
00:00:04.801 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10
00:00:04.882 [Pipeline] Start of Pipeline
00:00:04.897 [Pipeline] library
00:00:04.898 Loading library shm_lib@master
00:00:04.899 Library shm_lib@master is cached. Copying from home.
00:00:04.916 [Pipeline] node
00:00:04.945 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.946 [Pipeline] {
00:00:04.956 [Pipeline] catchError
00:00:04.957 [Pipeline] {
00:00:04.969 [Pipeline] wrap
00:00:04.976 [Pipeline] {
00:00:04.985 [Pipeline] stage
00:00:04.987 [Pipeline] { (Prologue)
00:00:05.196 [Pipeline] sh
00:00:06.083 + logger -p user.info -t JENKINS-CI
00:00:06.106 [Pipeline] echo
00:00:06.107 Node: GP11
00:00:06.115 [Pipeline] sh
00:00:06.454 [Pipeline] setCustomBuildProperty
00:00:06.463 [Pipeline] echo
00:00:06.465 Cleanup processes
00:00:06.469 [Pipeline] sh
00:00:06.761 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.761 4622 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.777 [Pipeline] sh
00:00:07.069 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.069 ++ grep -v 'sudo pgrep'
00:00:07.069 ++ awk '{print $1}'
00:00:07.069 + sudo kill -9
00:00:07.069 + true
00:00:07.089 [Pipeline] cleanWs
00:00:07.100 [WS-CLEANUP] Deleting project workspace...
00:00:07.100 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.115 [WS-CLEANUP] done
00:00:07.120 [Pipeline] setCustomBuildProperty
00:00:07.135 [Pipeline] sh
00:00:07.426 + sudo git config --global --replace-all safe.directory '*'
00:00:07.507 [Pipeline] httpRequest
00:00:09.378 [Pipeline] echo
00:00:09.380 Sorcerer 10.211.164.101 is alive
00:00:09.391 [Pipeline] retry
00:00:09.394 [Pipeline] {
00:00:09.409 [Pipeline] httpRequest
00:00:09.415 HttpMethod: GET
00:00:09.415 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:09.417 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:09.428 Response Code: HTTP/1.1 200 OK
00:00:09.428 Success: Status code 200 is in the accepted range: 200,404
00:00:09.429 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:12.489 [Pipeline] }
00:00:12.507 [Pipeline] // retry
00:00:12.514 [Pipeline] sh
00:00:12.806 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:12.824 [Pipeline] httpRequest
00:00:13.212 [Pipeline] echo
00:00:13.214 Sorcerer 10.211.164.101 is alive
00:00:13.223 [Pipeline] retry
00:00:13.225 [Pipeline] {
00:00:13.240 [Pipeline] httpRequest
00:00:13.245 HttpMethod: GET
00:00:13.245 URL: http://10.211.164.101/packages/spdk_b6849ff4773577992445b4734479476cb8aca324.tar.gz
00:00:13.247 Sending request to url: http://10.211.164.101/packages/spdk_b6849ff4773577992445b4734479476cb8aca324.tar.gz
00:00:13.267 Response Code: HTTP/1.1 200 OK
00:00:13.267 Success: Status code 200 is in the accepted range: 200,404
00:00:13.268 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b6849ff4773577992445b4734479476cb8aca324.tar.gz
00:01:24.738 [Pipeline] }
00:01:24.751 [Pipeline] // retry
00:01:24.758 [Pipeline] sh
00:01:25.055 + tar --no-same-owner -xf spdk_b6849ff4773577992445b4734479476cb8aca324.tar.gz
00:01:28.390 [Pipeline] sh
00:01:28.685 + git -C spdk log --oneline -n5
00:01:28.685 b6849ff47 test/compress: Add missing io pattern arg to run_bdeveprf()
00:01:28.685 bbce7a874 event: move struct spdk_lw_thread to internal header
00:01:28.685 5031f0f3b module/raid: Assign bdev_io buffers to raid_io
00:01:28.685 dc3ea9d27 bdevperf: Allocate an md buffer for verify op
00:01:28.685 0ce363beb spdk_log: introduce spdk_log_ext API
00:01:28.706 [Pipeline] withCredentials
00:01:28.720 > git --version # timeout=10
00:01:28.734 > git --version # 'git version 2.39.2'
00:01:28.765 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:28.768 [Pipeline] {
00:01:28.779 [Pipeline] retry
00:01:28.782 [Pipeline] {
00:01:28.797 [Pipeline] sh
00:01:29.407 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:29.423 [Pipeline] }
00:01:29.441 [Pipeline] // retry
00:01:29.446 [Pipeline] }
00:01:29.463 [Pipeline] // withCredentials
00:01:29.474 [Pipeline] httpRequest
00:01:29.898 [Pipeline] echo
00:01:29.900 Sorcerer 10.211.164.101 is alive
00:01:29.910 [Pipeline] retry
00:01:29.912 [Pipeline] {
00:01:29.927 [Pipeline] httpRequest
00:01:29.933 HttpMethod: GET
00:01:29.933 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:29.935 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:29.944 Response Code: HTTP/1.1 200 OK
00:01:29.944 Success: Status code 200 is in the accepted range: 200,404
00:01:29.945 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:42.526 [Pipeline] }
00:01:42.546 [Pipeline] // retry
00:01:42.555 [Pipeline] sh
00:01:42.856 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:44.267 [Pipeline] sh
00:01:44.565 + git -C dpdk log --oneline -n5
00:01:44.565 eeb0605f11 version: 23.11.0
00:01:44.565 238778122a doc: update release notes for 23.11
00:01:44.565 46aa6b3cfc doc: fix description of RSS features
00:01:44.565 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:44.565 7e421ae345 devtools: support skipping forbid rule check
00:01:44.578 [Pipeline] }
00:01:44.593 [Pipeline] // stage
00:01:44.604 [Pipeline] stage
00:01:44.606 [Pipeline] { (Prepare)
00:01:44.630 [Pipeline] writeFile
00:01:44.650 [Pipeline] sh
00:01:44.947 + logger -p user.info -t JENKINS-CI
00:01:44.963 [Pipeline] sh
00:01:45.256 + logger -p user.info -t JENKINS-CI
00:01:45.272 [Pipeline] sh
00:01:45.567 + cat autorun-spdk.conf
00:01:45.567 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:45.567 SPDK_TEST_NVMF=1
00:01:45.567 SPDK_TEST_NVME_CLI=1
00:01:45.567 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:45.567 SPDK_TEST_NVMF_NICS=e810
00:01:45.567 SPDK_TEST_VFIOUSER=1
00:01:45.567 SPDK_RUN_UBSAN=1
00:01:45.567 NET_TYPE=phy
00:01:45.567 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:45.567 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:45.577 RUN_NIGHTLY=1
00:01:45.582 [Pipeline] readFile
00:01:45.637 [Pipeline] withEnv
00:01:45.640 [Pipeline] {
00:01:45.654 [Pipeline] sh
00:01:45.949 + set -ex
00:01:45.949 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:45.949 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:45.949 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:45.949 ++ SPDK_TEST_NVMF=1
00:01:45.949 ++ SPDK_TEST_NVME_CLI=1
00:01:45.949 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:45.949 ++ SPDK_TEST_NVMF_NICS=e810
00:01:45.949 ++ SPDK_TEST_VFIOUSER=1
00:01:45.949 ++ SPDK_RUN_UBSAN=1
00:01:45.949 ++ NET_TYPE=phy
00:01:45.949 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:45.949 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:45.949 ++ RUN_NIGHTLY=1
00:01:45.949 + case $SPDK_TEST_NVMF_NICS in
00:01:45.949 + DRIVERS=ice
00:01:45.949 + [[ tcp == \r\d\m\a ]]
00:01:45.949 + [[ -n ice ]]
00:01:45.949 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:45.949 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:50.184 rmmod: ERROR: Module irdma is not currently loaded
00:01:50.185 rmmod: ERROR: Module i40iw is not currently loaded
00:01:50.185 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:50.185 + true
00:01:50.185 + for D in $DRIVERS
00:01:50.185 + sudo modprobe ice
00:01:50.185 + exit 0
00:01:50.197 [Pipeline] }
00:01:50.212 [Pipeline] // withEnv
00:01:50.218 [Pipeline] }
00:01:50.232 [Pipeline] // stage
00:01:50.243 [Pipeline] catchError
00:01:50.245 [Pipeline] {
00:01:50.260 [Pipeline] timeout
00:01:50.260 Timeout set to expire in 1 hr 0 min
00:01:50.262 [Pipeline] {
00:01:50.277 [Pipeline] stage
00:01:50.280 [Pipeline] { (Tests)
00:01:50.296 [Pipeline] sh
00:01:50.593 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:50.593 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:50.593 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:50.593 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:50.593 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:50.593 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:50.593 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:50.593 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:50.593 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:50.593 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:50.593 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:50.593 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:50.593 + source /etc/os-release
00:01:50.593 ++ NAME='Fedora Linux'
00:01:50.593 ++ VERSION='39 (Cloud Edition)'
00:01:50.593 ++ ID=fedora
00:01:50.593 ++ VERSION_ID=39
00:01:50.593 ++ VERSION_CODENAME=
00:01:50.593 ++ PLATFORM_ID=platform:f39
00:01:50.593 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:50.593 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:50.593 ++ LOGO=fedora-logo-icon
00:01:50.593 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:50.593 ++ HOME_URL=https://fedoraproject.org/
00:01:50.593 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:50.593 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:50.593 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:50.593 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:50.593 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:50.593 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:50.593 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:50.593 ++ SUPPORT_END=2024-11-12
00:01:50.593 ++ VARIANT='Cloud Edition'
00:01:50.593 ++ VARIANT_ID=cloud
00:01:50.593 + uname -a
00:01:50.593 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:50.593 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:51.538 Hugepages
00:01:51.538 node hugesize free / total
00:01:51.538 node0 1048576kB 0 / 0
00:01:51.538 node0 2048kB 0 / 0
00:01:51.538 node1 1048576kB 0 / 0
00:01:51.538 node1 2048kB 0 / 0
00:01:51.538
00:01:51.538 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:51.538 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:51.538 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:51.538 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:51.538 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:51.538 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:51.538 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:51.538 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:51.538 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:51.538 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:51.538 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:51.538 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:51.799 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:51.799 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:51.799 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:51.799 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:51.799 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:51.799 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:51.799 + rm -f /tmp/spdk-ld-path
00:01:51.799 + source autorun-spdk.conf
00:01:51.799 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.799 ++ SPDK_TEST_NVMF=1
00:01:51.799 ++ SPDK_TEST_NVME_CLI=1
00:01:51.799 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:51.799 ++ SPDK_TEST_NVMF_NICS=e810
00:01:51.799 ++ SPDK_TEST_VFIOUSER=1
00:01:51.799 ++ SPDK_RUN_UBSAN=1
00:01:51.799 ++ NET_TYPE=phy
00:01:51.799 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:51.799 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:51.799 ++ RUN_NIGHTLY=1
00:01:51.799 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:51.799 + [[ -n '' ]]
00:01:51.799 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:51.799 + for M in /var/spdk/build-*-manifest.txt
00:01:51.799 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:51.799 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:51.799 + for M in /var/spdk/build-*-manifest.txt
00:01:51.799 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:51.799 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:51.799 + for M in /var/spdk/build-*-manifest.txt
00:01:51.799 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:51.799 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:51.799 ++ uname
00:01:51.799 + [[ Linux == \L\i\n\u\x ]]
00:01:51.799 + sudo dmesg -T
00:01:51.799 + sudo dmesg --clear
00:01:51.799 + dmesg_pid=5947
00:01:51.799 + [[ Fedora Linux == FreeBSD ]]
00:01:51.799 + sudo dmesg -Tw
00:01:51.799 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:51.799 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:51.799 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:51.799 + [[ -x /usr/src/fio-static/fio ]]
00:01:51.799 + export FIO_BIN=/usr/src/fio-static/fio
00:01:51.799 + FIO_BIN=/usr/src/fio-static/fio
00:01:51.799 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:51.799 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:51.799 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:51.799 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:51.799 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:51.799 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:51.799 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:51.799 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:51.799 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:51.799 Test configuration:
00:01:51.799 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:51.799 SPDK_TEST_NVMF=1
00:01:51.799 SPDK_TEST_NVME_CLI=1
00:01:51.799 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:51.799 SPDK_TEST_NVMF_NICS=e810
00:01:51.799 SPDK_TEST_VFIOUSER=1
00:01:51.799 SPDK_RUN_UBSAN=1
00:01:51.799 NET_TYPE=phy
00:01:51.799 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:51.799 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:51.799 RUN_NIGHTLY=1
13:12:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
13:12:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:12:43 -- scripts/common.sh@15 -- $ shopt -s extglob
13:12:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
13:12:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:12:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:12:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:12:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:12:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:12:43 -- paths/export.sh@5 -- $ export PATH
13:12:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:12:43 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
13:12:43 -- common/autobuild_common.sh@486 -- $ date +%s
13:12:43 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728904363.XXXXXX
13:12:43 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728904363.3MhfJ9
13:12:43 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
13:12:43 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']'
13:12:43 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
13:12:43 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
13:12:43 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
13:12:43 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:12:43 -- common/autobuild_common.sh@502 -- $ get_config_params
13:12:43 -- common/autotest_common.sh@407 -- $ xtrace_disable
13:12:43 -- common/autotest_common.sh@10 -- $ set +x
13:12:43 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
13:12:43 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
13:12:43 -- pm/common@17 -- $ local monitor
13:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:12:43 -- pm/common@21 -- $ date +%s
13:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:12:43 -- pm/common@21 -- $ date +%s
13:12:43 -- pm/common@25 -- $ sleep 1
13:12:43 -- pm/common@21 -- $ date +%s
13:12:43 -- pm/common@21 -- $ date +%s
13:12:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728904363
13:12:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728904363
13:12:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728904363
13:12:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728904363
00:01:51.800 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728904363_collect-vmstat.pm.log
00:01:51.800 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728904363_collect-cpu-load.pm.log
00:01:51.800 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728904363_collect-cpu-temp.pm.log
00:01:52.059 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728904363_collect-bmc-pm.bmc.pm.log
00:01:53.008 13:12:44 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:53.008 13:12:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:53.008 13:12:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:53.008 13:12:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:53.008 13:12:44 -- spdk/autobuild.sh@16 -- $ date -u
00:01:53.008 Mon Oct 14 11:12:44 AM UTC 2024
00:01:53.008 13:12:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:53.008 v25.01-pre-56-gb6849ff47
00:01:53.008 13:12:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:53.008 13:12:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:53.008 13:12:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:53.008 13:12:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:53.008 13:12:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:53.008 13:12:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.008 ************************************
00:01:53.008 START TEST ubsan
00:01:53.008 ************************************
00:01:53.008 13:12:44 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:53.008 using ubsan
00:01:53.008
00:01:53.008 real 0m0.000s
00:01:53.008 user 0m0.000s
00:01:53.008 sys 0m0.000s
00:01:53.008 13:12:44 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:53.008 13:12:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:53.008 ************************************
00:01:53.008 END TEST ubsan
00:01:53.008 ************************************
00:01:53.008 13:12:44 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:53.008 13:12:44 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:53.008 13:12:44 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:53.008 13:12:44 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:53.008 13:12:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:53.008 13:12:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.008 ************************************
00:01:53.008 START TEST build_native_dpdk
00:01:53.008 ************************************
00:01:53.008 13:12:44 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:53.008 eeb0605f11 version: 23.11.0
00:01:53.008 238778122a doc: update release notes for 23.11
00:01:53.008 46aa6b3cfc doc: fix description of RSS features
00:01:53.008 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:53.008 7e421ae345 devtools: support skipping forbid rule check
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:53.008 13:12:44 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:53.008 13:12:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:53.009 patching file config/rte_config.h
00:01:53.009 Hunk #1 succeeded at 60 (offset 1 line).
00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:53.009 patching file lib/pcapng/rte_pcapng.c 00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:53.009 13:12:44 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:53.009 13:12:44 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:53.009 13:12:44 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:59.604 The Meson build system 00:01:59.604 Version: 1.5.0 00:01:59.604 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:59.604 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:59.604 Build type: native build 00:01:59.604 Program cat found: YES (/usr/bin/cat) 00:01:59.604 Project name: DPDK 00:01:59.604 Project version: 23.11.0 00:01:59.604 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:59.604 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:59.604 Host machine cpu family: x86_64 00:01:59.604 Host machine cpu: x86_64 00:01:59.604 Message: ## Building in Developer Mode ## 00:01:59.604 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.604 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:59.604 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.604 Program python3 found: YES (/usr/bin/python3) 00:01:59.604 Program cat found: YES (/usr/bin/cat) 00:01:59.604 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:59.604 Compiler for C supports arguments -march=native: YES 00:01:59.604 Checking for size of "void *" : 8 00:01:59.604 Checking for size of "void *" : 8 (cached) 00:01:59.604 Library m found: YES 00:01:59.604 Library numa found: YES 00:01:59.604 Has header "numaif.h" : YES 00:01:59.604 Library fdt found: NO 00:01:59.604 Library execinfo found: NO 00:01:59.604 Has header "execinfo.h" : YES 00:01:59.604 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:59.604 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.604 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.604 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.605 Run-time dependency openssl found: YES 3.1.1 00:01:59.605 Run-time dependency libpcap found: YES 1.10.4 00:01:59.605 Has header "pcap.h" with dependency libpcap: YES 00:01:59.605 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.605 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.605 Compiler for C supports arguments -Wformat: YES 00:01:59.605 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.605 Compiler for C supports arguments -Wformat-security: NO 00:01:59.605 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.605 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.605 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.605 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.605 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.605 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.605 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.605 Compiler for C supports arguments -Wundef: YES 00:01:59.605 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.605 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.605 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.605 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:01:59.605 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.605 Program objdump found: YES (/usr/bin/objdump) 00:01:59.605 Compiler for C supports arguments -mavx512f: YES 00:01:59.605 Checking if "AVX512 checking" compiles: YES 00:01:59.605 Fetching value of define "__SSE4_2__" : 1 00:01:59.605 Fetching value of define "__AES__" : 1 00:01:59.605 Fetching value of define "__AVX__" : 1 00:01:59.605 Fetching value of define "__AVX2__" : (undefined) 00:01:59.605 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.605 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.605 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.605 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.605 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.605 Fetching value of define "__PCLMUL__" : 1 00:01:59.605 Fetching value of define "__RDRND__" : 1 00:01:59.605 Fetching value of define "__RDSEED__" : (undefined) 00:01:59.605 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.605 Fetching value of define "__znver1__" : (undefined) 00:01:59.605 Fetching value of define "__znver2__" : (undefined) 00:01:59.605 Fetching value of define "__znver3__" : (undefined) 00:01:59.605 Fetching value of define "__znver4__" : (undefined) 00:01:59.605 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.605 Message: lib/log: Defining dependency "log" 00:01:59.605 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.605 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.605 Checking for function "getentropy" : NO 00:01:59.605 Message: lib/eal: Defining dependency "eal" 00:01:59.605 Message: lib/ring: Defining dependency "ring" 00:01:59.605 Message: lib/rcu: Defining dependency "rcu" 00:01:59.605 Message: lib/mempool: Defining dependency "mempool" 00:01:59.605 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.605 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.605 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.605 Compiler for C supports arguments -mpclmul: YES 00:01:59.605 Compiler for C supports arguments -maes: YES 00:01:59.605 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.605 Compiler for C supports arguments -mavx512bw: YES 00:01:59.605 Compiler for C supports arguments -mavx512dq: YES 00:01:59.605 Compiler for C supports arguments -mavx512vl: YES 00:01:59.605 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.605 Compiler for C supports arguments -mavx2: YES 00:01:59.605 Compiler for C supports arguments -mavx: YES 00:01:59.605 Message: lib/net: Defining dependency "net" 00:01:59.605 Message: lib/meter: Defining dependency "meter" 00:01:59.605 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.605 Message: lib/pci: Defining dependency "pci" 00:01:59.605 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.605 Message: lib/metrics: Defining dependency "metrics" 00:01:59.605 Message: lib/hash: Defining dependency "hash" 00:01:59.605 Message: lib/timer: Defining dependency "timer" 00:01:59.605 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.605 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:59.605 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:59.605 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:59.605 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:59.605 Message: lib/acl: Defining dependency "acl" 00:01:59.605 Message: lib/bbdev: Defining dependency "bbdev" 00:01:59.605 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:59.605 Run-time dependency libelf found: YES 0.191 00:01:59.605 Message: lib/bpf: Defining dependency "bpf" 00:01:59.605 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:59.605 Message: lib/compressdev: Defining 
dependency "compressdev" 00:01:59.605 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.605 Message: lib/distributor: Defining dependency "distributor" 00:01:59.605 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.605 Message: lib/efd: Defining dependency "efd" 00:01:59.605 Message: lib/eventdev: Defining dependency "eventdev" 00:01:59.605 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:59.605 Message: lib/gpudev: Defining dependency "gpudev" 00:01:59.605 Message: lib/gro: Defining dependency "gro" 00:01:59.605 Message: lib/gso: Defining dependency "gso" 00:01:59.605 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:59.605 Message: lib/jobstats: Defining dependency "jobstats" 00:01:59.605 Message: lib/latencystats: Defining dependency "latencystats" 00:01:59.605 Message: lib/lpm: Defining dependency "lpm" 00:01:59.605 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.605 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.605 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:59.605 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:59.605 Message: lib/member: Defining dependency "member" 00:01:59.605 Message: lib/pcapng: Defining dependency "pcapng" 00:01:59.605 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.605 Message: lib/power: Defining dependency "power" 00:01:59.605 Message: lib/rawdev: Defining dependency "rawdev" 00:01:59.605 Message: lib/regexdev: Defining dependency "regexdev" 00:01:59.605 Message: lib/mldev: Defining dependency "mldev" 00:01:59.605 Message: lib/rib: Defining dependency "rib" 00:01:59.605 Message: lib/reorder: Defining dependency "reorder" 00:01:59.605 Message: lib/sched: Defining dependency "sched" 00:01:59.605 Message: lib/security: Defining dependency "security" 00:01:59.605 Message: lib/stack: Defining dependency "stack" 00:01:59.605 Has header "linux/userfaultfd.h" : YES 00:01:59.605 Has 
header "linux/vduse.h" : YES 00:01:59.605 Message: lib/vhost: Defining dependency "vhost" 00:01:59.605 Message: lib/ipsec: Defining dependency "ipsec" 00:01:59.605 Message: lib/pdcp: Defining dependency "pdcp" 00:01:59.605 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.605 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.605 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:59.605 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:59.605 Message: lib/fib: Defining dependency "fib" 00:01:59.605 Message: lib/port: Defining dependency "port" 00:01:59.605 Message: lib/pdump: Defining dependency "pdump" 00:01:59.605 Message: lib/table: Defining dependency "table" 00:01:59.605 Message: lib/pipeline: Defining dependency "pipeline" 00:01:59.605 Message: lib/graph: Defining dependency "graph" 00:01:59.605 Message: lib/node: Defining dependency "node" 00:02:00.994 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.995 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.995 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.995 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.995 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:00.995 Compiler for C supports arguments -Wno-unused-value: YES 00:02:00.995 Compiler for C supports arguments -Wno-format: YES 00:02:00.995 Compiler for C supports arguments -Wno-format-security: YES 00:02:00.995 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:00.995 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:00.995 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:00.995 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:00.995 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:00.995 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.995 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:00.995 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:00.995 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:00.995 Has header "sys/epoll.h" : YES 00:02:00.995 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:00.995 Configuring doxy-api-html.conf using configuration 00:02:00.995 Configuring doxy-api-man.conf using configuration 00:02:00.995 Program mandb found: YES (/usr/bin/mandb) 00:02:00.995 Program sphinx-build found: NO 00:02:00.995 Configuring rte_build_config.h using configuration 00:02:00.995 Message: 00:02:00.995 ================= 00:02:00.995 Applications Enabled 00:02:00.995 ================= 00:02:00.995 00:02:00.995 apps: 00:02:00.995 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:00.995 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:00.995 test-pmd, test-regex, test-sad, test-security-perf, 00:02:00.995 00:02:00.995 Message: 00:02:00.995 ================= 00:02:00.995 Libraries Enabled 00:02:00.995 ================= 00:02:00.995 00:02:00.995 libs: 00:02:00.995 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.995 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:00.995 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:00.995 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:00.995 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:00.995 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:00.995 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:00.995 00:02:00.995 00:02:00.995 Message: 00:02:00.995 =============== 00:02:00.995 Drivers Enabled 00:02:00.995 =============== 00:02:00.995 00:02:00.995 common: 00:02:00.995 00:02:00.995 bus: 00:02:00.995 pci, vdev, 00:02:00.995 mempool: 00:02:00.995 ring, 00:02:00.995 dma: 
00:02:00.995 00:02:00.995 net: 00:02:00.995 i40e, 00:02:00.995 raw: 00:02:00.995 00:02:00.995 crypto: 00:02:00.995 00:02:00.995 compress: 00:02:00.995 00:02:00.995 regex: 00:02:00.995 00:02:00.995 ml: 00:02:00.995 00:02:00.995 vdpa: 00:02:00.995 00:02:00.995 event: 00:02:00.995 00:02:00.995 baseband: 00:02:00.995 00:02:00.995 gpu: 00:02:00.995 00:02:00.995 00:02:00.995 Message: 00:02:00.995 ================= 00:02:00.995 Content Skipped 00:02:00.995 ================= 00:02:00.995 00:02:00.995 apps: 00:02:00.995 00:02:00.995 libs: 00:02:00.995 00:02:00.995 drivers: 00:02:00.995 common/cpt: not in enabled drivers build config 00:02:00.995 common/dpaax: not in enabled drivers build config 00:02:00.995 common/iavf: not in enabled drivers build config 00:02:00.995 common/idpf: not in enabled drivers build config 00:02:00.995 common/mvep: not in enabled drivers build config 00:02:00.995 common/octeontx: not in enabled drivers build config 00:02:00.995 bus/auxiliary: not in enabled drivers build config 00:02:00.995 bus/cdx: not in enabled drivers build config 00:02:00.995 bus/dpaa: not in enabled drivers build config 00:02:00.995 bus/fslmc: not in enabled drivers build config 00:02:00.995 bus/ifpga: not in enabled drivers build config 00:02:00.995 bus/platform: not in enabled drivers build config 00:02:00.995 bus/vmbus: not in enabled drivers build config 00:02:00.995 common/cnxk: not in enabled drivers build config 00:02:00.995 common/mlx5: not in enabled drivers build config 00:02:00.995 common/nfp: not in enabled drivers build config 00:02:00.995 common/qat: not in enabled drivers build config 00:02:00.995 common/sfc_efx: not in enabled drivers build config 00:02:00.995 mempool/bucket: not in enabled drivers build config 00:02:00.995 mempool/cnxk: not in enabled drivers build config 00:02:00.995 mempool/dpaa: not in enabled drivers build config 00:02:00.995 mempool/dpaa2: not in enabled drivers build config 00:02:00.995 mempool/octeontx: not in enabled drivers build 
config 00:02:00.995 mempool/stack: not in enabled drivers build config 00:02:00.995 dma/cnxk: not in enabled drivers build config 00:02:00.995 dma/dpaa: not in enabled drivers build config 00:02:00.995 dma/dpaa2: not in enabled drivers build config 00:02:00.995 dma/hisilicon: not in enabled drivers build config 00:02:00.995 dma/idxd: not in enabled drivers build config 00:02:00.995 dma/ioat: not in enabled drivers build config 00:02:00.995 dma/skeleton: not in enabled drivers build config 00:02:00.995 net/af_packet: not in enabled drivers build config 00:02:00.995 net/af_xdp: not in enabled drivers build config 00:02:00.995 net/ark: not in enabled drivers build config 00:02:00.995 net/atlantic: not in enabled drivers build config 00:02:00.995 net/avp: not in enabled drivers build config 00:02:00.995 net/axgbe: not in enabled drivers build config 00:02:00.995 net/bnx2x: not in enabled drivers build config 00:02:00.995 net/bnxt: not in enabled drivers build config 00:02:00.995 net/bonding: not in enabled drivers build config 00:02:00.995 net/cnxk: not in enabled drivers build config 00:02:00.995 net/cpfl: not in enabled drivers build config 00:02:00.995 net/cxgbe: not in enabled drivers build config 00:02:00.995 net/dpaa: not in enabled drivers build config 00:02:00.995 net/dpaa2: not in enabled drivers build config 00:02:00.995 net/e1000: not in enabled drivers build config 00:02:00.995 net/ena: not in enabled drivers build config 00:02:00.995 net/enetc: not in enabled drivers build config 00:02:00.995 net/enetfec: not in enabled drivers build config 00:02:00.995 net/enic: not in enabled drivers build config 00:02:00.995 net/failsafe: not in enabled drivers build config 00:02:00.995 net/fm10k: not in enabled drivers build config 00:02:00.995 net/gve: not in enabled drivers build config 00:02:00.995 net/hinic: not in enabled drivers build config 00:02:00.995 net/hns3: not in enabled drivers build config 00:02:00.995 net/iavf: not in enabled drivers build config 
00:02:00.995 net/ice: not in enabled drivers build config 00:02:00.995 net/idpf: not in enabled drivers build config 00:02:00.995 net/igc: not in enabled drivers build config 00:02:00.995 net/ionic: not in enabled drivers build config 00:02:00.995 net/ipn3ke: not in enabled drivers build config 00:02:00.995 net/ixgbe: not in enabled drivers build config 00:02:00.995 net/mana: not in enabled drivers build config 00:02:00.995 net/memif: not in enabled drivers build config 00:02:00.995 net/mlx4: not in enabled drivers build config 00:02:00.995 net/mlx5: not in enabled drivers build config 00:02:00.995 net/mvneta: not in enabled drivers build config 00:02:00.995 net/mvpp2: not in enabled drivers build config 00:02:00.995 net/netvsc: not in enabled drivers build config 00:02:00.995 net/nfb: not in enabled drivers build config 00:02:00.995 net/nfp: not in enabled drivers build config 00:02:00.995 net/ngbe: not in enabled drivers build config 00:02:00.995 net/null: not in enabled drivers build config 00:02:00.995 net/octeontx: not in enabled drivers build config 00:02:00.995 net/octeon_ep: not in enabled drivers build config 00:02:00.995 net/pcap: not in enabled drivers build config 00:02:00.995 net/pfe: not in enabled drivers build config 00:02:00.995 net/qede: not in enabled drivers build config 00:02:00.995 net/ring: not in enabled drivers build config 00:02:00.995 net/sfc: not in enabled drivers build config 00:02:00.995 net/softnic: not in enabled drivers build config 00:02:00.995 net/tap: not in enabled drivers build config 00:02:00.995 net/thunderx: not in enabled drivers build config 00:02:00.995 net/txgbe: not in enabled drivers build config 00:02:00.995 net/vdev_netvsc: not in enabled drivers build config 00:02:00.995 net/vhost: not in enabled drivers build config 00:02:00.995 net/virtio: not in enabled drivers build config 00:02:00.995 net/vmxnet3: not in enabled drivers build config 00:02:00.995 raw/cnxk_bphy: not in enabled drivers build config 00:02:00.995 
raw/cnxk_gpio: not in enabled drivers build config 00:02:00.995 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:00.995 raw/ifpga: not in enabled drivers build config 00:02:00.995 raw/ntb: not in enabled drivers build config 00:02:00.995 raw/skeleton: not in enabled drivers build config 00:02:00.995 crypto/armv8: not in enabled drivers build config 00:02:00.995 crypto/bcmfs: not in enabled drivers build config 00:02:00.995 crypto/caam_jr: not in enabled drivers build config 00:02:00.995 crypto/ccp: not in enabled drivers build config 00:02:00.995 crypto/cnxk: not in enabled drivers build config 00:02:00.995 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.995 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.995 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.995 crypto/mlx5: not in enabled drivers build config 00:02:00.995 crypto/mvsam: not in enabled drivers build config 00:02:00.995 crypto/nitrox: not in enabled drivers build config 00:02:00.995 crypto/null: not in enabled drivers build config 00:02:00.995 crypto/octeontx: not in enabled drivers build config 00:02:00.995 crypto/openssl: not in enabled drivers build config 00:02:00.995 crypto/scheduler: not in enabled drivers build config 00:02:00.995 crypto/uadk: not in enabled drivers build config 00:02:00.995 crypto/virtio: not in enabled drivers build config 00:02:00.995 compress/isal: not in enabled drivers build config 00:02:00.995 compress/mlx5: not in enabled drivers build config 00:02:00.995 compress/octeontx: not in enabled drivers build config 00:02:00.995 compress/zlib: not in enabled drivers build config 00:02:00.995 regex/mlx5: not in enabled drivers build config 00:02:00.995 regex/cn9k: not in enabled drivers build config 00:02:00.995 ml/cnxk: not in enabled drivers build config 00:02:00.995 vdpa/ifc: not in enabled drivers build config 00:02:00.995 vdpa/mlx5: not in enabled drivers build config 00:02:00.995 vdpa/nfp: not in enabled drivers build 
config 00:02:00.995 vdpa/sfc: not in enabled drivers build config 00:02:00.995 event/cnxk: not in enabled drivers build config 00:02:00.996 event/dlb2: not in enabled drivers build config 00:02:00.996 event/dpaa: not in enabled drivers build config 00:02:00.996 event/dpaa2: not in enabled drivers build config 00:02:00.996 event/dsw: not in enabled drivers build config 00:02:00.996 event/opdl: not in enabled drivers build config 00:02:00.996 event/skeleton: not in enabled drivers build config 00:02:00.996 event/sw: not in enabled drivers build config 00:02:00.996 event/octeontx: not in enabled drivers build config 00:02:00.996 baseband/acc: not in enabled drivers build config 00:02:00.996 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:00.996 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:00.996 baseband/la12xx: not in enabled drivers build config 00:02:00.996 baseband/null: not in enabled drivers build config 00:02:00.996 baseband/turbo_sw: not in enabled drivers build config 00:02:00.996 gpu/cuda: not in enabled drivers build config 00:02:00.996 00:02:00.996 00:02:00.996 Build targets in project: 220 00:02:00.996 00:02:00.996 DPDK 23.11.0 00:02:00.996 00:02:00.996 User defined options 00:02:00.996 libdir : lib 00:02:00.996 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.996 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:00.996 c_link_args : 00:02:00.996 enable_docs : false 00:02:00.996 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:00.996 enable_kmods : false 00:02:00.996 machine : native 00:02:00.996 tests : false 00:02:00.996 00:02:00.996 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.996 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
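Meson's closing WARNING flags the bare `meson [options]` invocation used by `autobuild_common.sh@188` as ambiguous and deprecated. For reference, a sketch of the same configure step spelled with the explicit `setup` subcommand — all paths and `-D` options are copied verbatim from the log above; whether SPDK's autobuild script adopts this spelling is an assumption, not something this log shows:

```shell
# Equivalent configure step with the explicit 'setup' subcommand, which
# avoids the "ambiguous and deprecated" warning Meson prints above.
# (Config fragment only: requires the DPDK source tree from this job.)
meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

The earlier `config/meson.build:113` warning would similarly be silenced by passing `-Dcpu_instruction_set=native` in place of the deprecated `-Dmachine=native`.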
00:02:00.996 13:12:52 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:02:00.996 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:00.996 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:00.996 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:00.996 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:01.264 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:01.264 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:01.264 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:01.264 [7/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:01.264 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:01.264 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:01.264 [10/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:01.264 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:01.264 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:01.264 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:01.264 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:01.264 [15/710] Linking static target lib/librte_kvargs.a
00:02:01.264 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:01.264 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:01.527 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:01.527 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:01.527 [20/710] Linking static target lib/librte_log.a
00:02:01.527 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:01.795 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.059 [23/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:02.059 [24/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.059 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:02.059 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:02.324 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:02.324 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:02.324 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:02.324 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:02.324 [31/710] Linking target lib/librte_log.so.24.0
00:02:02.324 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:02.324 [33/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:02.324 [34/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:02.324 [35/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:02.324 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:02.324 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:02.324 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:02.324 [39/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:02.324 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:02.324 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:02.324 [42/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:02.324 [43/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:02.324 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:02.324 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:02.324 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:02.324 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:02.324 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:02.324 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:02.324 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:02.324 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:02.324 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:02.324 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:02.584 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:02.584 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:02.584 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:02.584 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:02.584 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:02.584 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:02.584 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:02.584 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:02.584 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:02.849 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:02.849 [64/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:02.849 [65/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:02.849 [66/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:02.849 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:03.112 [68/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:03.112 [69/710] Linking static target lib/librte_pci.a
00:02:03.112 [70/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:03.112 [71/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:03.112 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:03.112 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:03.378 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:03.378 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:03.378 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:03.378 [77/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.378 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:03.378 [79/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:03.378 [80/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:03.378 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:03.378 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:03.378 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:03.378 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:03.378 [85/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:03.378 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:03.378 [87/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:03.378 [88/710] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:03.378 [89/710] Linking static target lib/librte_ring.a
00:02:03.642 [90/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:03.642 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:03.642 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:03.642 [93/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:03.642 [94/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:03.642 [95/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:03.642 [96/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:03.642 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:03.642 [98/710] Linking static target lib/librte_meter.a
00:02:03.642 [99/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:03.642 [100/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:03.642 [101/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:03.907 [102/710] Linking static target lib/librte_telemetry.a
00:02:03.907 [103/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:03.907 [104/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:03.907 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:03.907 [106/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:03.907 [107/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:03.907 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:03.907 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:03.907 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:03.907 [111/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.907 [112/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:04.172 [113/710] Linking static target lib/librte_eal.a
00:02:04.172 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:04.172 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:04.172 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.172 [117/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:04.172 [118/710] Linking static target lib/librte_net.a
00:02:04.172 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:04.172 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:04.172 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:04.172 [122/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:04.437 [123/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:04.437 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:04.437 [125/710] Linking static target lib/librte_mempool.a
00:02:04.437 [126/710] Linking static target lib/librte_cmdline.a
00:02:04.437 [127/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.704 [128/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:04.704 [129/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.704 [130/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:04.704 [131/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:04.704 [132/710] Linking static target lib/librte_cfgfile.a
00:02:04.704 [133/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:04.704 [134/710] Linking static target lib/librte_metrics.a
00:02:04.704 [135/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:04.704 [136/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:04.704 [137/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:04.704 [138/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:04.973 [139/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:04.973 [140/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:04.973 [141/710] Linking static target lib/librte_rcu.a
00:02:04.973 [142/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:04.973 [143/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:04.973 [144/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:04.973 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:04.973 [146/710] Linking static target lib/librte_bitratestats.a
00:02:04.973 [147/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:04.973 [148/710] Linking target lib/librte_kvargs.so.24.0
00:02:04.973 [149/710] Linking target lib/librte_telemetry.so.24.0
00:02:05.237 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:05.237 [151/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:05.237 [152/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.237 [153/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:05.237 [154/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.237 [155/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:05.237 [156/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:05.237 [157/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.237 [158/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:05.506 [159/710] Linking static target lib/librte_timer.a
00:02:05.506 [160/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:05.506 [161/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:05.506 [162/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:05.506 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.506 [164/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.506 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:05.506 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:05.506 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:05.773 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:05.773 [169/710] Linking static target lib/librte_bbdev.a
00:02:05.773 [170/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.773 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:05.773 [172/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:05.773 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:05.773 [174/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:06.037 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:06.037 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.037 [177/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:06.037 [178/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:06.037 [179/710] Linking static target lib/librte_compressdev.a
00:02:06.037 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:06.037 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:06.299 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:06.299 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:06.566 [184/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:06.567 [185/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:06.567 [186/710] Linking static target lib/librte_distributor.a
00:02:06.567 [187/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:06.567 [188/710] Linking static target lib/librte_dmadev.a
00:02:06.567 [189/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:06.567 [190/710] Linking static target lib/librte_bpf.a
00:02:06.567 [191/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:06.567 [192/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.840 [193/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:06.840 [194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.840 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:06.840 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:06.840 [197/710] Linking static target lib/librte_dispatcher.a
00:02:06.840 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:06.840 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:06.840 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:07.107 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:07.107 [202/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.107 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:07.107 [204/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:07.107 [205/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:07.107 [206/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.107 [207/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:07.107 [208/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:07.107 [209/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:07.107 [210/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:07.107 [211/710] Linking static target lib/librte_gpudev.a
00:02:07.107 [212/710] Linking static target lib/librte_gro.a
00:02:07.107 [213/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:07.108 [214/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:07.108 [215/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:07.108 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.371 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:07.371 [218/710] Linking static target lib/librte_jobstats.a
00:02:07.371 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:07.371 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:07.638 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.638 [222/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:07.638 [223/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:07.638 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:07.638 [225/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:07.638 [226/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.638 [227/710] Linking static target lib/librte_latencystats.a
00:02:07.908 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:07.908 [229/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.908 [230/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:07.908 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:07.908 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:07.908 [233/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:07.908 [234/710] Linking static target lib/librte_ip_frag.a
00:02:07.908 [235/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:08.175 [236/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:08.175 [237/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:08.175 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.175 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:08.442 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:08.442 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:08.442 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:08.442 [243/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:08.442 [244/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.442 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.442 [246/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:08.442 [247/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:08.442 [248/710] Linking static target lib/librte_gso.a
00:02:08.708 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:08.708 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:08.708 [251/710] Linking static target lib/librte_regexdev.a
00:02:08.708 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:08.708 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:08.708 [254/710] Linking static target lib/librte_rawdev.a
00:02:08.708 [255/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:08.971 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:08.971 [257/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.971 [258/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:08.971 [259/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:08.971 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:08.971 [261/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:08.971 [262/710] Linking static target lib/librte_efd.a
00:02:08.971 [263/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:08.971 [264/710] Linking static target lib/librte_mldev.a
00:02:08.971 [265/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:09.240 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:09.240 [267/710] Linking static target lib/librte_pcapng.a
00:02:09.240 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:09.240 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:09.240 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:09.240 [271/710] Linking static target lib/acl/libavx2_tmp.a
00:02:09.240 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:09.240 [273/710] Linking static target lib/librte_stack.a
00:02:09.240 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:09.240 [275/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:09.240 [276/710] Linking static target lib/librte_lpm.a
00:02:09.503 [277/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.503 [278/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:09.503 [279/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:09.503 [280/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:09.503 [281/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:09.503 [282/710] Linking static target lib/librte_hash.a
00:02:09.503 [283/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.503 [284/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:09.503 [285/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.503 [286/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:09.503 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.771 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:09.771 [289/710] Linking static target lib/acl/libavx512_tmp.a
00:02:09.771 [290/710] Linking static target lib/librte_acl.a
00:02:09.771 [291/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:09.771 [292/710] Linking static target lib/librte_power.a
00:02:09.771 [293/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:09.771 [294/710] Linking static target lib/librte_reorder.a
00:02:09.771 [295/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:09.771 [296/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.038 [297/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.038 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:10.038 [299/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:10.038 [300/710] Linking static target lib/librte_security.a
00:02:10.038 [301/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:10.038 [302/710] Linking static target lib/librte_mbuf.a
00:02:10.038 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:10.307 [304/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:10.307 [305/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.307 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:10.307 [307/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:10.307 [308/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:10.307 [309/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.307 [310/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.307 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:10.307 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:10.574 [313/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:10.574 [314/710] Linking static target lib/librte_rib.a
00:02:10.574 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:10.574 [316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:10.574 [317/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:10.574 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.574 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:02:10.574 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a
00:02:10.839 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:10.839 [322/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:10.839 [323/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.839 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:02:10.839 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:10.839 [326/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:02:10.839 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:10.839 [328/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.103 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.103 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:11.103 [331/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.367 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:11.367 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:11.632 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:11.632 [335/710] Linking static target lib/librte_member.a
00:02:11.632 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:11.632 [337/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:11.632 [338/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:11.632 [339/710] Linking static target lib/librte_eventdev.a
00:02:11.632 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:11.632 [341/710] Linking static target lib/librte_cryptodev.a
00:02:11.897 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:11.897 [343/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:11.897 [344/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:11.897 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:11.897 [346/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:11.897 [347/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:11.897 [348/710] Linking static target lib/librte_sched.a
00:02:11.897 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:11.897 [350/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:11.897 [351/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:11.897 [352/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:11.897 [353/710] Linking static target lib/librte_ethdev.a
00:02:12.162 [354/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:12.162 [355/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:12.162 [356/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:12.162 [357/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.162 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:12.162 [359/710] Linking static target lib/librte_fib.a
00:02:12.162 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:12.162 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:12.162 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:12.426 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:12.427 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:12.427 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:12.427 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:12.694 [367/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.694 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:12.694 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:12.694 [370/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:12.694 [371/710] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:12.694 [372/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.966 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:12.966 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:12.966 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:12.966 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:12.966 [377/710] Linking static target lib/librte_pdump.a
00:02:13.230 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:13.230 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:13.230 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:13.230 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:13.230 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:13.230 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:13.230 [384/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:13.230 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:13.499 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:13.499 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:13.499 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:13.499 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:13.499 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.499 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:13.499 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:13.499 [393/710] Linking static target lib/librte_ipsec.a
00:02:13.499 [394/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:13.763 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:13.763 [396/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.763 [397/710] Linking static target lib/librte_table.a
00:02:13.763 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:14.034 [399/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:14.034 [400/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:14.034 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:14.034 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.299 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:14.564 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:14.564 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:14.564 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:14.564 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:14.564 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:14.564 [409/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:02:14.564 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:14.564 [411/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:14.564 [412/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:14.829 [413/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:14.829 [414/710] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:14.829 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.829 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:15.094 [417/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:15.094 [418/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.094 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:15.094 [420/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.094 [421/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:15.094 [422/710] Linking static target lib/librte_port.a
00:02:15.094 [423/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:15.362 [424/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:15.362 [425/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:15.362 [426/710] Linking static target drivers/librte_bus_vdev.a
00:02:15.362 [427/710] Linking target lib/librte_eal.so.24.0
00:02:15.362 [428/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:15.362 [429/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:15.628 [430/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:15.628 [431/710] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:15.628 [432/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:15.628 [433/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:15.628 [434/710] Linking target lib/librte_ring.so.24.0
00:02:15.628 [435/710] Linking target lib/librte_timer.so.24.0
00:02:15.628 [436/710] Linking target lib/librte_pci.so.24.0
00:02:15.628 [437/710] Linking target lib/librte_meter.so.24.0
00:02:15.628 [438/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:15.628 [439/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:15.628 [440/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.894 [441/710] Linking target lib/librte_acl.so.24.0
00:02:15.894 [442/710] Linking target lib/librte_dmadev.so.24.0
00:02:15.894 [443/710] Linking target lib/librte_cfgfile.so.24.0
00:02:15.894 [444/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:15.894 [445/710] Linking target lib/librte_jobstats.so.24.0
00:02:15.894 [446/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:15.894 [447/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:15.894 [448/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:02:15.894 [449/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:15.894 [450/710] Linking target lib/librte_rcu.so.24.0
00:02:15.894 [451/710] Linking static target lib/librte_graph.a
00:02:15.894 [452/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:15.894 [453/710] Linking target lib/librte_mempool.so.24.0
00:02:15.894 [454/710] Linking target lib/librte_stack.so.24.0
00:02:15.894 [455/710] Linking static target drivers/librte_bus_pci.a
00:02:15.894 [456/710] Linking target lib/librte_rawdev.so.24.0
00:02:15.894 [457/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:15.894 [458/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:15.894 [459/710] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:15.894 [460/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:02:15.894 [461/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:02:16.164 [462/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.164 [463/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:16.164 [464/710] Linking target drivers/librte_bus_vdev.so.24.0
00:02:16.164 [465/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:16.164 [466/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:16.164 [467/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:16.164 [468/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:16.164 [469/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:16.435 [470/710] Linking target lib/librte_mbuf.so.24.0
00:02:16.435 [471/710] Linking target lib/librte_rib.so.24.0
00:02:16.436 [472/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:02:16.436 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:16.436 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:16.436 [475/710] Linking static target drivers/librte_mempool_ring.a
00:02:16.436 [476/710]
Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.436 [477/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:16.436 [478/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:16.436 [479/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:16.436 [480/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:16.436 [481/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:16.436 [482/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:16.436 [483/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:16.697 [484/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:16.697 [485/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:16.697 [486/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:16.697 [487/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:16.697 [488/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:16.697 [489/710] Linking target lib/librte_net.so.24.0 00:02:16.697 [490/710] Linking target lib/librte_bbdev.so.24.0 00:02:16.697 [491/710] Linking target lib/librte_compressdev.so.24.0 00:02:16.697 [492/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:16.697 [493/710] Linking target lib/librte_distributor.so.24.0 00:02:16.697 [494/710] Linking target lib/librte_cryptodev.so.24.0 00:02:16.697 [495/710] Linking target lib/librte_gpudev.so.24.0 00:02:16.697 [496/710] Linking target lib/librte_regexdev.so.24.0 00:02:16.697 [497/710] Linking target lib/librte_mldev.so.24.0 00:02:16.698 [498/710] Linking target lib/librte_reorder.so.24.0 00:02:16.698 [499/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:16.698 [500/710] Linking target lib/librte_sched.so.24.0 
00:02:16.962 [501/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.962 [502/710] Linking target lib/librte_fib.so.24.0 00:02:16.962 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:16.962 [504/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:16.962 [505/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:16.962 [506/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:16.962 [507/710] Linking target lib/librte_cmdline.so.24.0 00:02:16.962 [508/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.962 [509/710] Linking target lib/librte_hash.so.24.0 00:02:16.963 [510/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:16.963 [511/710] Linking target lib/librte_security.so.24.0 00:02:16.963 [512/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:17.228 [513/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:17.228 [514/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:17.228 [515/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:17.228 [516/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:17.228 [517/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:17.228 [518/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:17.228 [519/710] Linking target lib/librte_efd.so.24.0 00:02:17.228 [520/710] Linking target lib/librte_lpm.so.24.0 00:02:17.228 [521/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:17.494 [522/710] Linking target lib/librte_member.so.24.0 00:02:17.494 [523/710] Linking target lib/librte_ipsec.so.24.0 00:02:17.494 [524/710] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:17.494 [525/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:17.494 [526/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:17.764 [527/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:17.764 [528/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:17.764 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:17.764 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:17.764 [531/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:17.764 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:18.028 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:18.028 [534/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:18.028 [535/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:18.028 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:18.028 [537/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:18.298 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:18.298 [539/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:18.298 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:18.298 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:18.563 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:18.563 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:18.828 [544/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:18.828 [545/710] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:18.828 [546/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:18.828 [547/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:18.828 [548/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:18.828 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:19.091 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:19.091 [551/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:19.091 [552/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:19.091 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:19.091 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:19.091 [555/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:19.355 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:19.355 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:19.355 [558/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:19.355 [559/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:19.622 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:19.887 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:20.154 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:20.154 [563/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.154 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:20.154 [565/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:20.154 [566/710] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:20.154 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:20.418 [568/710] Linking target lib/librte_ethdev.so.24.0 00:02:20.418 [569/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:20.418 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:20.418 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:20.418 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:20.418 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:20.418 [574/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:20.686 [575/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:20.686 [576/710] Linking target lib/librte_metrics.so.24.0 00:02:20.686 [577/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:20.686 [578/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:20.686 [579/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:20.686 [580/710] Linking target lib/librte_bpf.so.24.0 00:02:20.686 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:20.686 [582/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:20.686 [583/710] Linking target lib/librte_gro.so.24.0 00:02:20.950 [584/710] Linking target lib/librte_eventdev.so.24.0 00:02:20.950 [585/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:20.950 [586/710] Linking target lib/librte_gso.so.24.0 00:02:20.950 [587/710] Linking target lib/librte_ip_frag.so.24.0 00:02:20.950 [588/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:20.950 [589/710] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:20.950 [590/710] Linking target lib/librte_pcapng.so.24.0 00:02:20.950 [591/710] Linking target lib/librte_power.so.24.0 00:02:20.950 [592/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:20.950 [593/710] Linking static target lib/librte_pdcp.a 00:02:20.950 [594/710] Linking target lib/librte_bitratestats.so.24.0 00:02:20.950 [595/710] Linking target lib/librte_latencystats.so.24.0 00:02:20.950 [596/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:20.950 [597/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:20.950 [598/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:21.223 [599/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:21.223 [600/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:21.223 [601/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:21.223 [602/710] Linking target lib/librte_dispatcher.so.24.0 00:02:21.223 [603/710] Linking target lib/librte_pdump.so.24.0 00:02:21.223 [604/710] Linking target lib/librte_port.so.24.0 00:02:21.223 [605/710] Linking target lib/librte_graph.so.24.0 00:02:21.223 [606/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:21.223 [607/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:21.223 [608/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:21.487 [609/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:21.487 [610/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.487 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:21.487 [612/710] Generating symbol file 
lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:21.487 [613/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:21.487 [614/710] Linking target lib/librte_pdcp.so.24.0 00:02:21.487 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:21.487 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:21.750 [617/710] Linking target lib/librte_table.so.24.0 00:02:21.750 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:21.750 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:21.750 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:22.015 [621/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:22.015 [622/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:22.015 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:22.015 [624/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:22.015 [625/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:22.015 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:22.015 [627/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:22.015 [628/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:22.276 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:22.276 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:22.536 [631/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:22.803 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:22.804 [633/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:22.804 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:22.804 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:22.804 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:22.804 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:22.804 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:23.064 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:23.064 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:23.064 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:23.064 [642/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:23.064 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:23.324 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:23.324 [645/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:23.324 [646/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:23.324 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:23.324 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:23.584 [649/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:23.584 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:23.584 [651/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:23.584 [652/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:23.584 [653/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:23.584 [654/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:23.844 [655/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:23.844 [656/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:23.844 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:24.108 [658/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:24.108 [659/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:24.108 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:24.108 [661/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:24.108 [662/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:24.108 [663/710] Linking static target drivers/librte_net_i40e.a 00:02:24.367 [664/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:24.627 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:24.627 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.886 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:24.886 [668/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:24.886 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:25.145 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:25.712 [671/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:25.712 [672/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:25.971 [673/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:26.242 [674/710] Linking static target lib/librte_node.a 00:02:26.508 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.508 [676/710] Linking target lib/librte_node.so.24.0 
00:02:27.077 [677/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:27.336 [678/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:27.595 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:29.500 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:29.760 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:35.034 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:07.119 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.119 [684/710] Linking static target lib/librte_vhost.a 00:03:07.119 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.119 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:17.128 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:17.128 [688/710] Linking static target lib/librte_pipeline.a 00:03:17.128 [689/710] Linking target app/dpdk-pdump 00:03:17.128 [690/710] Linking target app/dpdk-dumpcap 00:03:17.128 [691/710] Linking target app/dpdk-test-acl 00:03:17.128 [692/710] Linking target app/dpdk-test-fib 00:03:17.128 [693/710] Linking target app/dpdk-test-pipeline 00:03:17.128 [694/710] Linking target app/dpdk-test-security-perf 00:03:17.128 [695/710] Linking target app/dpdk-test-cmdline 00:03:17.128 [696/710] Linking target app/dpdk-test-bbdev 00:03:17.128 [697/710] Linking target app/dpdk-test-gpudev 00:03:17.128 [698/710] Linking target app/dpdk-proc-info 00:03:17.128 [699/710] Linking target app/dpdk-test-eventdev 00:03:17.128 [700/710] Linking target app/dpdk-test-regex 00:03:17.128 [701/710] Linking target app/dpdk-test-sad 00:03:17.128 [702/710] Linking target app/dpdk-test-flow-perf 00:03:17.128 [703/710] Linking target app/dpdk-graph 00:03:17.128 [704/710] Linking target app/dpdk-test-crypto-perf 00:03:17.128 [705/710] 
Linking target app/dpdk-test-dma-perf 00:03:17.128 [706/710] Linking target app/dpdk-test-mldev 00:03:17.128 [707/710] Linking target app/dpdk-test-compress-perf 00:03:17.402 [708/710] Linking target app/dpdk-testpmd 00:03:19.314 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.314 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:19.314 13:14:11 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:19.314 13:14:11 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:19.314 13:14:11 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:19.573 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:19.573 [0/1] Installing files. 00:03:19.837 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 
00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.837 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.838 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 
00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.838 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:19.838 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.839 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:19.839 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.839 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.840 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.841 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:19.842 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:19.842 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_mbuf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_acl.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing 
lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing 
lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.842 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:19.843 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_sched.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_table.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:20.412 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:20.412 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:20.412 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.412 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:20.412 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-pdump to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.412 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.412 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.413 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.676 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.677 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:20.678 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:20.679 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:20.679 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:20.679 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.679 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:20.679 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:20.679 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:20.679 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:20.679 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:20.679 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:20.679 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:20.679 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:20.679 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:20.679 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:20.679 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:20.679 
Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:20.679 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:20.679 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:20.679 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:20.679 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:20.679 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:20.679 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:20.679 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:20.679 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:20.679 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:20.679 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:20.679 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:20.679 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:20.679 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 
00:03:20.679 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:20.679 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:20.679 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:20.679 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:20.679 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:20.679 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:20.679 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:20.679 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:20.679 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:20.679 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:20.679 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:20.679 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:20.679 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:20.679 Installing symlink pointing to librte_bitratestats.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:20.679 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:20.679 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:20.679 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:20.679 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:20.679 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:20.679 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:20.679 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:20.679 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:20.679 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:20.679 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:20.679 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:20.679 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:20.679 Installing symlink pointing to librte_dmadev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:20.679 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:20.679 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:20.679 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:20.679 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:20.679 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:20.679 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:20.679 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:20.679 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:20.679 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:20.679 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:20.679 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:20.679 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:20.679 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:20.679 Installing 
symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:20.679 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:20.679 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:20.679 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:20.679 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:20.679 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:20.679 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:20.679 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:20.679 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:20.679 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:20.679 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:20.679 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:20.679 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:20.679 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:20.679 
'./librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:20.679 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:20.679 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:20.679 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:20.679 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:20.679 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:20.679 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:20.679 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:20.679 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:20.679 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:20.679 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:20.679 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:20.679 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:20.679 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:20.679 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:20.679 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:20.679 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:20.679 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:20.679 Installing symlink pointing to librte_rib.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:20.679 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:20.679 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:20.680 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:20.680 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:20.680 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:20.680 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:20.680 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:20.680 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:20.680 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:20.680 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:20.680 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:20.680 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:20.680 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:20.680 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:20.680 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:20.680 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:20.680 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:20.680 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:20.680 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:20.680 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:20.680 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:20.680 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:20.680 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:20.680 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:20.680 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:20.680 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:20.680 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 
00:03:20.680 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:20.680 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:20.680 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:20.680 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:20.680 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:20.680 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:20.680 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:20.680 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:20.680 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:20.680 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:20.680 13:14:12 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:20.680 13:14:12 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:20.680 00:03:20.680 real 1m27.667s 00:03:20.680 user 18m0.695s 00:03:20.680 sys 2m11.707s 00:03:20.680 13:14:12 build_native_dpdk -- 
common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:20.680 13:14:12 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:20.680 ************************************ 00:03:20.680 END TEST build_native_dpdk 00:03:20.680 ************************************ 00:03:20.680 13:14:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:20.680 13:14:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:20.680 13:14:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:20.680 13:14:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:20.680 13:14:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:20.680 13:14:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:20.680 13:14:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:20.680 13:14:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:20.680 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:20.939 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:20.939 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:20.939 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:21.198 Using 'verbs' RDMA provider 00:03:32.135 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:42.139 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:42.140 Creating mk/config.mk...done. 00:03:42.140 Creating mk/cc.flags.mk...done. 00:03:42.140 Type 'make' to build. 
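The install phase logged above creates the conventional three-name chain for each DPDK shared library: one real versioned object (e.g. `librte_log.so.24.0`), a SONAME symlink (`librte_log.so.24`) pointing at it, and a bare linker-name symlink (`librte_log.so`) pointing at the SONAME link. This can be reproduced in miniature with dummy files; the scratch directory and file names below are illustrative, not the Jenkins workspace paths from the log:

```shell
# Miniature reproduction of the symlink layout meson installs above:
# the versioned object librte_log.so.24.0 is the only regular file;
# the SONAME link (librte_log.so.24) and the linker name (librte_log.so)
# both ultimately resolve to it.
set -eu
dir=$(mktemp -d)
cd "$dir"
touch librte_log.so.24.0                    # stand-in for the built library
ln -s librte_log.so.24.0 librte_log.so.24   # SONAME symlink, used at run time
ln -s librte_log.so.24   librte_log.so      # linker name, used by -lrte_log
readlink -f librte_log.so                   # resolves to .../librte_log.so.24.0
```

The dynamic linker follows the SONAME link when loading, while the build-time linker follows the bare `.so` name, which is why both symlinks are installed for every library in the log.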
00:03:42.140 13:14:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:42.140 13:14:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:42.140 13:14:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:42.140 13:14:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.140 ************************************ 00:03:42.140 START TEST make 00:03:42.140 ************************************ 00:03:42.140 13:14:33 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:42.140 make[1]: Nothing to be done for 'all'. 00:03:43.537 The Meson build system 00:03:43.537 Version: 1.5.0 00:03:43.537 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:43.537 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:43.537 Build type: native build 00:03:43.537 Project name: libvfio-user 00:03:43.537 Project version: 0.0.1 00:03:43.537 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:43.537 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:43.537 Host machine cpu family: x86_64 00:03:43.537 Host machine cpu: x86_64 00:03:43.537 Run-time dependency threads found: YES 00:03:43.537 Library dl found: YES 00:03:43.537 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:43.537 Run-time dependency json-c found: YES 0.17 00:03:43.537 Run-time dependency cmocka found: YES 1.1.7 00:03:43.537 Program pytest-3 found: NO 00:03:43.537 Program flake8 found: NO 00:03:43.537 Program misspell-fixer found: NO 00:03:43.537 Program restructuredtext-lint found: NO 00:03:43.537 Program valgrind found: YES (/usr/bin/valgrind) 00:03:43.537 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:43.537 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:43.537 Compiler for C supports arguments -Wwrite-strings: YES 00:03:43.537 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:43.537 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:43.537 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:43.537 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:43.537 Build targets in project: 8 00:03:43.537 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:43.537 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:43.537 00:03:43.537 libvfio-user 0.0.1 00:03:43.537 00:03:43.537 User defined options 00:03:43.537 buildtype : debug 00:03:43.537 default_library: shared 00:03:43.537 libdir : /usr/local/lib 00:03:43.537 00:03:43.537 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:44.489 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:44.764 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:44.764 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:44.764 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:44.764 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:44.764 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:44.764 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:44.764 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:44.764 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:44.764 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:44.764 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:44.764 [11/37] Compiling C object samples/null.p/null.c.o 00:03:44.764 
[12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:44.764 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:44.764 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:44.764 [15/37] Compiling C object samples/server.p/server.c.o 00:03:44.764 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:44.764 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:44.764 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:44.764 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:44.764 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:44.764 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:44.764 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:44.764 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:44.764 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:45.030 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:45.030 [26/37] Compiling C object samples/client.p/client.c.o 00:03:45.030 [27/37] Linking target samples/client 00:03:45.030 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:45.030 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:45.030 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:45.030 [31/37] Linking target test/unit_tests 00:03:45.292 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:45.292 [33/37] Linking target samples/server 00:03:45.292 [34/37] Linking target samples/gpio-pci-idio-16 00:03:45.292 [35/37] Linking target samples/lspci 00:03:45.292 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:45.292 [37/37] Linking target samples/null 00:03:45.292 INFO: autodetecting backend as ninja 00:03:45.292 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:45.557 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:46.141 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:46.141 ninja: no work to do. 00:04:24.863 CC lib/ut_mock/mock.o 00:04:24.863 CC lib/ut/ut.o 00:04:24.863 CC lib/log/log.o 00:04:24.863 CC lib/log/log_flags.o 00:04:24.863 CC lib/log/log_deprecated.o 00:04:24.863 LIB libspdk_ut.a 00:04:24.863 LIB libspdk_ut_mock.a 00:04:24.863 LIB libspdk_log.a 00:04:24.863 SO libspdk_ut.so.2.0 00:04:24.863 SO libspdk_ut_mock.so.6.0 00:04:24.863 SO libspdk_log.so.7.1 00:04:24.863 SYMLINK libspdk_ut_mock.so 00:04:24.863 SYMLINK libspdk_ut.so 00:04:24.863 SYMLINK libspdk_log.so 00:04:24.863 CC lib/ioat/ioat.o 00:04:24.863 CXX lib/trace_parser/trace.o 00:04:24.863 CC lib/dma/dma.o 00:04:24.863 CC lib/util/base64.o 00:04:24.863 CC lib/util/bit_array.o 00:04:24.863 CC lib/util/cpuset.o 00:04:24.863 CC lib/util/crc16.o 00:04:24.863 CC lib/util/crc32.o 00:04:24.863 CC lib/util/crc32c.o 00:04:24.863 CC lib/util/crc32_ieee.o 00:04:24.863 CC lib/util/crc64.o 00:04:24.863 CC lib/util/dif.o 00:04:24.863 CC lib/util/fd.o 00:04:24.863 CC lib/util/fd_group.o 00:04:24.863 CC lib/util/file.o 00:04:24.863 CC lib/util/hexlify.o 00:04:24.863 CC lib/util/iov.o 00:04:24.863 CC lib/util/math.o 00:04:24.863 CC lib/util/net.o 00:04:24.863 CC lib/util/pipe.o 00:04:24.863 CC lib/util/strerror_tls.o 00:04:24.863 CC lib/util/uuid.o 00:04:24.863 CC lib/util/string.o 00:04:24.863 CC lib/util/xor.o 00:04:24.863 CC lib/util/md5.o 00:04:24.863 CC lib/util/zipf.o 00:04:24.863 CC lib/vfio_user/host/vfio_user_pci.o 00:04:24.863 CC lib/vfio_user/host/vfio_user.o 00:04:24.863 LIB libspdk_dma.a 00:04:24.863 SO libspdk_dma.so.5.0 00:04:24.863 SYMLINK libspdk_dma.so 
00:04:24.863 LIB libspdk_ioat.a 00:04:24.863 SO libspdk_ioat.so.7.0 00:04:24.863 LIB libspdk_vfio_user.a 00:04:24.863 SYMLINK libspdk_ioat.so 00:04:24.863 SO libspdk_vfio_user.so.5.0 00:04:24.863 SYMLINK libspdk_vfio_user.so 00:04:24.863 LIB libspdk_util.a 00:04:24.863 SO libspdk_util.so.10.0 00:04:24.863 SYMLINK libspdk_util.so 00:04:24.863 CC lib/rdma_utils/rdma_utils.o 00:04:24.863 CC lib/json/json_parse.o 00:04:24.863 CC lib/conf/conf.o 00:04:24.863 CC lib/json/json_util.o 00:04:24.863 CC lib/rdma_provider/common.o 00:04:24.863 CC lib/idxd/idxd.o 00:04:24.863 CC lib/vmd/vmd.o 00:04:24.863 CC lib/json/json_write.o 00:04:24.863 CC lib/idxd/idxd_user.o 00:04:24.863 CC lib/env_dpdk/env.o 00:04:24.863 CC lib/vmd/led.o 00:04:24.863 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:24.863 CC lib/env_dpdk/memory.o 00:04:24.863 CC lib/idxd/idxd_kernel.o 00:04:24.863 CC lib/env_dpdk/pci.o 00:04:24.863 CC lib/env_dpdk/threads.o 00:04:24.863 CC lib/env_dpdk/pci_ioat.o 00:04:24.863 CC lib/env_dpdk/init.o 00:04:24.863 CC lib/env_dpdk/pci_virtio.o 00:04:24.863 CC lib/env_dpdk/pci_vmd.o 00:04:24.863 CC lib/env_dpdk/pci_idxd.o 00:04:24.863 CC lib/env_dpdk/pci_event.o 00:04:24.863 CC lib/env_dpdk/sigbus_handler.o 00:04:24.863 CC lib/env_dpdk/pci_dpdk.o 00:04:24.863 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:24.863 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:24.863 LIB libspdk_rdma_provider.a 00:04:24.863 SO libspdk_rdma_provider.so.6.0 00:04:24.863 LIB libspdk_conf.a 00:04:24.863 SO libspdk_conf.so.6.0 00:04:24.863 SYMLINK libspdk_rdma_provider.so 00:04:24.863 LIB libspdk_rdma_utils.a 00:04:24.863 SYMLINK libspdk_conf.so 00:04:24.863 LIB libspdk_json.a 00:04:24.863 SO libspdk_rdma_utils.so.1.0 00:04:24.863 SO libspdk_json.so.6.0 00:04:24.863 SYMLINK libspdk_rdma_utils.so 00:04:24.863 SYMLINK libspdk_json.so 00:04:24.863 CC lib/jsonrpc/jsonrpc_server.o 00:04:24.863 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:24.863 CC lib/jsonrpc/jsonrpc_client.o 00:04:24.864 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:04:24.864 LIB libspdk_idxd.a 00:04:24.864 SO libspdk_idxd.so.12.1 00:04:24.864 LIB libspdk_vmd.a 00:04:24.864 SO libspdk_vmd.so.6.0 00:04:24.864 SYMLINK libspdk_idxd.so 00:04:24.864 SYMLINK libspdk_vmd.so 00:04:25.123 LIB libspdk_jsonrpc.a 00:04:25.123 SO libspdk_jsonrpc.so.6.0 00:04:25.123 SYMLINK libspdk_jsonrpc.so 00:04:25.123 LIB libspdk_trace_parser.a 00:04:25.123 SO libspdk_trace_parser.so.6.0 00:04:25.383 SYMLINK libspdk_trace_parser.so 00:04:25.383 CC lib/rpc/rpc.o 00:04:25.383 LIB libspdk_rpc.a 00:04:25.641 SO libspdk_rpc.so.6.0 00:04:25.641 SYMLINK libspdk_rpc.so 00:04:25.642 CC lib/trace/trace.o 00:04:25.642 CC lib/notify/notify.o 00:04:25.642 CC lib/trace/trace_flags.o 00:04:25.642 CC lib/trace/trace_rpc.o 00:04:25.642 CC lib/notify/notify_rpc.o 00:04:25.642 CC lib/keyring/keyring.o 00:04:25.642 CC lib/keyring/keyring_rpc.o 00:04:25.900 LIB libspdk_notify.a 00:04:25.900 SO libspdk_notify.so.6.0 00:04:25.900 SYMLINK libspdk_notify.so 00:04:25.900 LIB libspdk_keyring.a 00:04:25.900 LIB libspdk_trace.a 00:04:25.900 SO libspdk_keyring.so.2.0 00:04:26.159 SO libspdk_trace.so.11.0 00:04:26.159 SYMLINK libspdk_keyring.so 00:04:26.159 SYMLINK libspdk_trace.so 00:04:26.159 CC lib/sock/sock.o 00:04:26.159 CC lib/sock/sock_rpc.o 00:04:26.159 CC lib/thread/thread.o 00:04:26.159 CC lib/thread/iobuf.o 00:04:26.159 LIB libspdk_env_dpdk.a 00:04:26.417 SO libspdk_env_dpdk.so.15.0 00:04:26.417 SYMLINK libspdk_env_dpdk.so 00:04:26.676 LIB libspdk_sock.a 00:04:26.676 SO libspdk_sock.so.10.0 00:04:26.676 SYMLINK libspdk_sock.so 00:04:26.935 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:26.935 CC lib/nvme/nvme_ctrlr.o 00:04:26.935 CC lib/nvme/nvme_fabric.o 00:04:26.935 CC lib/nvme/nvme_ns_cmd.o 00:04:26.935 CC lib/nvme/nvme_ns.o 00:04:26.935 CC lib/nvme/nvme_pcie_common.o 00:04:26.935 CC lib/nvme/nvme_pcie.o 00:04:26.935 CC lib/nvme/nvme_qpair.o 00:04:26.935 CC lib/nvme/nvme.o 00:04:26.935 CC lib/nvme/nvme_quirks.o 00:04:26.935 CC 
lib/nvme/nvme_transport.o 00:04:26.935 CC lib/nvme/nvme_discovery.o 00:04:26.935 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:26.935 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:26.935 CC lib/nvme/nvme_tcp.o 00:04:26.935 CC lib/nvme/nvme_opal.o 00:04:26.935 CC lib/nvme/nvme_io_msg.o 00:04:26.935 CC lib/nvme/nvme_poll_group.o 00:04:26.935 CC lib/nvme/nvme_zns.o 00:04:26.935 CC lib/nvme/nvme_stubs.o 00:04:26.935 CC lib/nvme/nvme_auth.o 00:04:26.935 CC lib/nvme/nvme_cuse.o 00:04:26.935 CC lib/nvme/nvme_vfio_user.o 00:04:26.935 CC lib/nvme/nvme_rdma.o 00:04:27.874 LIB libspdk_thread.a 00:04:27.874 SO libspdk_thread.so.10.2 00:04:27.874 SYMLINK libspdk_thread.so 00:04:28.133 CC lib/accel/accel.o 00:04:28.133 CC lib/accel/accel_rpc.o 00:04:28.133 CC lib/accel/accel_sw.o 00:04:28.133 CC lib/fsdev/fsdev.o 00:04:28.133 CC lib/fsdev/fsdev_io.o 00:04:28.133 CC lib/fsdev/fsdev_rpc.o 00:04:28.133 CC lib/vfu_tgt/tgt_endpoint.o 00:04:28.133 CC lib/blob/blobstore.o 00:04:28.133 CC lib/init/json_config.o 00:04:28.133 CC lib/virtio/virtio.o 00:04:28.133 CC lib/vfu_tgt/tgt_rpc.o 00:04:28.133 CC lib/blob/request.o 00:04:28.133 CC lib/virtio/virtio_vhost_user.o 00:04:28.133 CC lib/init/subsystem.o 00:04:28.133 CC lib/virtio/virtio_vfio_user.o 00:04:28.133 CC lib/init/subsystem_rpc.o 00:04:28.133 CC lib/blob/zeroes.o 00:04:28.133 CC lib/virtio/virtio_pci.o 00:04:28.133 CC lib/blob/blob_bs_dev.o 00:04:28.133 CC lib/init/rpc.o 00:04:28.392 LIB libspdk_init.a 00:04:28.392 SO libspdk_init.so.6.0 00:04:28.392 LIB libspdk_vfu_tgt.a 00:04:28.392 SYMLINK libspdk_init.so 00:04:28.392 LIB libspdk_virtio.a 00:04:28.392 SO libspdk_vfu_tgt.so.3.0 00:04:28.651 SO libspdk_virtio.so.7.0 00:04:28.651 SYMLINK libspdk_vfu_tgt.so 00:04:28.651 SYMLINK libspdk_virtio.so 00:04:28.651 CC lib/event/app.o 00:04:28.651 CC lib/event/reactor.o 00:04:28.651 CC lib/event/log_rpc.o 00:04:28.651 CC lib/event/app_rpc.o 00:04:28.651 CC lib/event/scheduler_static.o 00:04:28.910 LIB libspdk_fsdev.a 00:04:28.910 SO 
libspdk_fsdev.so.1.0 00:04:28.910 SYMLINK libspdk_fsdev.so 00:04:29.170 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:29.170 LIB libspdk_event.a 00:04:29.170 SO libspdk_event.so.15.0 00:04:29.170 SYMLINK libspdk_event.so 00:04:29.429 LIB libspdk_accel.a 00:04:29.429 LIB libspdk_nvme.a 00:04:29.429 SO libspdk_accel.so.16.0 00:04:29.429 SYMLINK libspdk_accel.so 00:04:29.429 SO libspdk_nvme.so.14.0 00:04:29.687 CC lib/bdev/bdev.o 00:04:29.687 CC lib/bdev/bdev_rpc.o 00:04:29.687 CC lib/bdev/bdev_zone.o 00:04:29.687 CC lib/bdev/part.o 00:04:29.687 CC lib/bdev/scsi_nvme.o 00:04:29.687 SYMLINK libspdk_nvme.so 00:04:29.687 LIB libspdk_fuse_dispatcher.a 00:04:29.687 SO libspdk_fuse_dispatcher.so.1.0 00:04:29.946 SYMLINK libspdk_fuse_dispatcher.so 00:04:31.324 LIB libspdk_blob.a 00:04:31.324 SO libspdk_blob.so.11.0 00:04:31.324 SYMLINK libspdk_blob.so 00:04:31.583 CC lib/lvol/lvol.o 00:04:31.583 CC lib/blobfs/blobfs.o 00:04:31.583 CC lib/blobfs/tree.o 00:04:32.151 LIB libspdk_bdev.a 00:04:32.151 SO libspdk_bdev.so.17.0 00:04:32.151 LIB libspdk_blobfs.a 00:04:32.416 SO libspdk_blobfs.so.10.0 00:04:32.416 SYMLINK libspdk_bdev.so 00:04:32.416 SYMLINK libspdk_blobfs.so 00:04:32.416 LIB libspdk_lvol.a 00:04:32.416 SO libspdk_lvol.so.10.0 00:04:32.416 SYMLINK libspdk_lvol.so 00:04:32.416 CC lib/nbd/nbd.o 00:04:32.416 CC lib/nvmf/ctrlr.o 00:04:32.416 CC lib/nbd/nbd_rpc.o 00:04:32.416 CC lib/nvmf/ctrlr_discovery.o 00:04:32.416 CC lib/nvmf/ctrlr_bdev.o 00:04:32.416 CC lib/nvmf/subsystem.o 00:04:32.416 CC lib/ublk/ublk.o 00:04:32.416 CC lib/nvmf/nvmf.o 00:04:32.416 CC lib/ublk/ublk_rpc.o 00:04:32.416 CC lib/scsi/dev.o 00:04:32.416 CC lib/nvmf/nvmf_rpc.o 00:04:32.416 CC lib/scsi/lun.o 00:04:32.416 CC lib/nvmf/transport.o 00:04:32.416 CC lib/ftl/ftl_core.o 00:04:32.416 CC lib/scsi/port.o 00:04:32.416 CC lib/ftl/ftl_init.o 00:04:32.416 CC lib/scsi/scsi.o 00:04:32.416 CC lib/nvmf/tcp.o 00:04:32.416 CC lib/ftl/ftl_layout.o 00:04:32.416 CC lib/nvmf/stubs.o 00:04:32.416 CC 
lib/scsi/scsi_bdev.o 00:04:32.416 CC lib/ftl/ftl_debug.o 00:04:32.416 CC lib/nvmf/vfio_user.o 00:04:32.416 CC lib/nvmf/mdns_server.o 00:04:32.416 CC lib/ftl/ftl_io.o 00:04:32.416 CC lib/scsi/scsi_pr.o 00:04:32.416 CC lib/nvmf/rdma.o 00:04:32.416 CC lib/scsi/scsi_rpc.o 00:04:32.416 CC lib/ftl/ftl_sb.o 00:04:32.416 CC lib/ftl/ftl_l2p.o 00:04:32.416 CC lib/scsi/task.o 00:04:32.416 CC lib/ftl/ftl_l2p_flat.o 00:04:32.416 CC lib/nvmf/auth.o 00:04:32.416 CC lib/ftl/ftl_nv_cache.o 00:04:32.416 CC lib/ftl/ftl_band.o 00:04:32.416 CC lib/ftl/ftl_band_ops.o 00:04:32.416 CC lib/ftl/ftl_writer.o 00:04:32.416 CC lib/ftl/ftl_rq.o 00:04:32.416 CC lib/ftl/ftl_reloc.o 00:04:32.416 CC lib/ftl/ftl_l2p_cache.o 00:04:32.416 CC lib/ftl/ftl_p2l.o 00:04:32.416 CC lib/ftl/ftl_p2l_log.o 00:04:32.416 CC lib/ftl/mngt/ftl_mngt.o 00:04:32.416 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:32.416 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:32.416 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:32.416 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:32.416 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:32.989 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:32.989 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:32.989 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:32.989 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:32.989 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:32.989 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:32.989 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:32.989 CC lib/ftl/utils/ftl_conf.o 00:04:32.989 CC lib/ftl/utils/ftl_md.o 00:04:32.989 CC lib/ftl/utils/ftl_mempool.o 00:04:32.989 CC lib/ftl/utils/ftl_bitmap.o 00:04:32.989 CC lib/ftl/utils/ftl_property.o 00:04:32.989 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:32.989 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:32.989 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:32.989 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:32.989 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:32.989 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:33.256 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:33.257 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:33.257 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:04:33.257 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:33.257 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:33.257 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:33.257 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:33.257 CC lib/ftl/base/ftl_base_dev.o 00:04:33.257 CC lib/ftl/base/ftl_base_bdev.o 00:04:33.257 CC lib/ftl/ftl_trace.o 00:04:33.517 LIB libspdk_nbd.a 00:04:33.517 SO libspdk_nbd.so.7.0 00:04:33.517 LIB libspdk_scsi.a 00:04:33.517 SYMLINK libspdk_nbd.so 00:04:33.517 SO libspdk_scsi.so.9.0 00:04:33.517 SYMLINK libspdk_scsi.so 00:04:33.776 LIB libspdk_ublk.a 00:04:33.776 SO libspdk_ublk.so.3.0 00:04:33.776 SYMLINK libspdk_ublk.so 00:04:33.776 CC lib/iscsi/conn.o 00:04:33.776 CC lib/vhost/vhost.o 00:04:33.776 CC lib/iscsi/init_grp.o 00:04:33.776 CC lib/vhost/vhost_rpc.o 00:04:33.776 CC lib/iscsi/iscsi.o 00:04:33.776 CC lib/vhost/vhost_scsi.o 00:04:33.776 CC lib/iscsi/param.o 00:04:33.776 CC lib/vhost/vhost_blk.o 00:04:33.776 CC lib/iscsi/portal_grp.o 00:04:33.776 CC lib/vhost/rte_vhost_user.o 00:04:33.776 CC lib/iscsi/tgt_node.o 00:04:33.776 CC lib/iscsi/iscsi_subsystem.o 00:04:33.776 CC lib/iscsi/iscsi_rpc.o 00:04:33.776 CC lib/iscsi/task.o 00:04:34.036 LIB libspdk_ftl.a 00:04:34.296 SO libspdk_ftl.so.9.0 00:04:34.554 SYMLINK libspdk_ftl.so 00:04:35.122 LIB libspdk_vhost.a 00:04:35.122 SO libspdk_vhost.so.8.0 00:04:35.122 SYMLINK libspdk_vhost.so 00:04:35.122 LIB libspdk_nvmf.a 00:04:35.122 SO libspdk_nvmf.so.19.0 00:04:35.122 LIB libspdk_iscsi.a 00:04:35.381 SO libspdk_iscsi.so.8.0 00:04:35.381 SYMLINK libspdk_iscsi.so 00:04:35.381 SYMLINK libspdk_nvmf.so 00:04:35.640 CC module/env_dpdk/env_dpdk_rpc.o 00:04:35.640 CC module/vfu_device/vfu_virtio.o 00:04:35.640 CC module/vfu_device/vfu_virtio_blk.o 00:04:35.640 CC module/vfu_device/vfu_virtio_scsi.o 00:04:35.640 CC module/vfu_device/vfu_virtio_rpc.o 00:04:35.640 CC module/vfu_device/vfu_virtio_fs.o 00:04:35.898 CC module/sock/posix/posix.o 00:04:35.898 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:04:35.898 CC module/accel/dsa/accel_dsa.o 00:04:35.898 CC module/scheduler/gscheduler/gscheduler.o 00:04:35.898 CC module/keyring/linux/keyring.o 00:04:35.898 CC module/accel/dsa/accel_dsa_rpc.o 00:04:35.898 CC module/accel/error/accel_error.o 00:04:35.898 CC module/keyring/file/keyring.o 00:04:35.898 CC module/keyring/linux/keyring_rpc.o 00:04:35.899 CC module/accel/error/accel_error_rpc.o 00:04:35.899 CC module/keyring/file/keyring_rpc.o 00:04:35.899 CC module/accel/iaa/accel_iaa.o 00:04:35.899 CC module/accel/ioat/accel_ioat.o 00:04:35.899 CC module/blob/bdev/blob_bdev.o 00:04:35.899 CC module/accel/iaa/accel_iaa_rpc.o 00:04:35.899 CC module/accel/ioat/accel_ioat_rpc.o 00:04:35.899 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:35.899 CC module/fsdev/aio/fsdev_aio.o 00:04:35.899 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:35.899 CC module/fsdev/aio/linux_aio_mgr.o 00:04:35.899 LIB libspdk_env_dpdk_rpc.a 00:04:35.899 SO libspdk_env_dpdk_rpc.so.6.0 00:04:35.899 LIB libspdk_keyring_linux.a 00:04:35.899 LIB libspdk_keyring_file.a 00:04:35.899 LIB libspdk_scheduler_dpdk_governor.a 00:04:35.899 SYMLINK libspdk_env_dpdk_rpc.so 00:04:35.899 SO libspdk_keyring_linux.so.1.0 00:04:35.899 LIB libspdk_scheduler_gscheduler.a 00:04:35.899 SO libspdk_keyring_file.so.2.0 00:04:35.899 LIB libspdk_accel_error.a 00:04:35.899 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:35.899 SO libspdk_scheduler_gscheduler.so.4.0 00:04:36.157 LIB libspdk_accel_ioat.a 00:04:36.157 LIB libspdk_scheduler_dynamic.a 00:04:36.157 SO libspdk_accel_error.so.2.0 00:04:36.157 SYMLINK libspdk_keyring_linux.so 00:04:36.157 SO libspdk_scheduler_dynamic.so.4.0 00:04:36.157 SYMLINK libspdk_keyring_file.so 00:04:36.157 SO libspdk_accel_ioat.so.6.0 00:04:36.157 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:36.157 SYMLINK libspdk_scheduler_gscheduler.so 00:04:36.157 SYMLINK libspdk_accel_error.so 00:04:36.157 SYMLINK libspdk_scheduler_dynamic.so 
00:04:36.157 SYMLINK libspdk_accel_ioat.so 00:04:36.157 LIB libspdk_accel_dsa.a 00:04:36.157 LIB libspdk_blob_bdev.a 00:04:36.157 LIB libspdk_accel_iaa.a 00:04:36.158 SO libspdk_blob_bdev.so.11.0 00:04:36.158 SO libspdk_accel_dsa.so.5.0 00:04:36.158 SO libspdk_accel_iaa.so.3.0 00:04:36.158 SYMLINK libspdk_blob_bdev.so 00:04:36.158 SYMLINK libspdk_accel_dsa.so 00:04:36.158 SYMLINK libspdk_accel_iaa.so 00:04:36.417 LIB libspdk_vfu_device.a 00:04:36.417 SO libspdk_vfu_device.so.3.0 00:04:36.417 CC module/bdev/null/bdev_null.o 00:04:36.417 CC module/blobfs/bdev/blobfs_bdev.o 00:04:36.417 CC module/bdev/raid/bdev_raid.o 00:04:36.417 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:36.417 CC module/bdev/null/bdev_null_rpc.o 00:04:36.417 CC module/bdev/raid/bdev_raid_rpc.o 00:04:36.417 CC module/bdev/raid/raid0.o 00:04:36.417 CC module/bdev/raid/bdev_raid_sb.o 00:04:36.417 CC module/bdev/aio/bdev_aio.o 00:04:36.417 CC module/bdev/raid/raid1.o 00:04:36.417 CC module/bdev/gpt/gpt.o 00:04:36.417 CC module/bdev/gpt/vbdev_gpt.o 00:04:36.417 CC module/bdev/raid/concat.o 00:04:36.417 CC module/bdev/malloc/bdev_malloc.o 00:04:36.417 CC module/bdev/aio/bdev_aio_rpc.o 00:04:36.417 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:36.417 CC module/bdev/error/vbdev_error.o 00:04:36.417 CC module/bdev/delay/vbdev_delay.o 00:04:36.417 CC module/bdev/error/vbdev_error_rpc.o 00:04:36.417 CC module/bdev/lvol/vbdev_lvol.o 00:04:36.417 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:36.417 CC module/bdev/iscsi/bdev_iscsi.o 00:04:36.417 CC module/bdev/passthru/vbdev_passthru.o 00:04:36.417 CC module/bdev/nvme/bdev_nvme.o 00:04:36.417 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:36.417 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:36.417 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:36.417 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:36.417 CC module/bdev/nvme/nvme_rpc.o 00:04:36.417 CC module/bdev/split/vbdev_split.o 00:04:36.417 CC module/bdev/nvme/bdev_mdns_client.o 00:04:36.417 CC 
module/bdev/split/vbdev_split_rpc.o 00:04:36.417 CC module/bdev/ftl/bdev_ftl.o 00:04:36.417 CC module/bdev/nvme/vbdev_opal.o 00:04:36.417 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:36.417 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:36.417 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:36.417 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:36.417 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:36.417 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:36.418 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:36.418 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:36.676 SYMLINK libspdk_vfu_device.so 00:04:36.676 LIB libspdk_fsdev_aio.a 00:04:36.676 SO libspdk_fsdev_aio.so.1.0 00:04:36.676 LIB libspdk_sock_posix.a 00:04:36.676 SO libspdk_sock_posix.so.6.0 00:04:36.676 SYMLINK libspdk_fsdev_aio.so 00:04:36.935 LIB libspdk_blobfs_bdev.a 00:04:36.935 SYMLINK libspdk_sock_posix.so 00:04:36.935 SO libspdk_blobfs_bdev.so.6.0 00:04:36.935 LIB libspdk_bdev_split.a 00:04:36.935 SYMLINK libspdk_blobfs_bdev.so 00:04:36.935 LIB libspdk_bdev_malloc.a 00:04:36.935 SO libspdk_bdev_split.so.6.0 00:04:36.935 LIB libspdk_bdev_passthru.a 00:04:36.935 SO libspdk_bdev_malloc.so.6.0 00:04:36.935 LIB libspdk_bdev_error.a 00:04:36.935 LIB libspdk_bdev_gpt.a 00:04:36.935 LIB libspdk_bdev_null.a 00:04:36.935 SO libspdk_bdev_passthru.so.6.0 00:04:36.935 SYMLINK libspdk_bdev_split.so 00:04:36.935 SO libspdk_bdev_gpt.so.6.0 00:04:36.935 SO libspdk_bdev_error.so.6.0 00:04:36.935 LIB libspdk_bdev_ftl.a 00:04:36.935 SO libspdk_bdev_null.so.6.0 00:04:36.935 SYMLINK libspdk_bdev_malloc.so 00:04:36.935 LIB libspdk_bdev_iscsi.a 00:04:36.935 SO libspdk_bdev_ftl.so.6.0 00:04:36.935 LIB libspdk_bdev_delay.a 00:04:36.935 SYMLINK libspdk_bdev_passthru.so 00:04:37.194 LIB libspdk_bdev_aio.a 00:04:37.194 SO libspdk_bdev_iscsi.so.6.0 00:04:37.194 SYMLINK libspdk_bdev_gpt.so 00:04:37.194 SYMLINK libspdk_bdev_error.so 00:04:37.194 SYMLINK libspdk_bdev_null.so 00:04:37.194 SO libspdk_bdev_delay.so.6.0 00:04:37.194 LIB 
libspdk_bdev_zone_block.a 00:04:37.194 SO libspdk_bdev_aio.so.6.0 00:04:37.194 SYMLINK libspdk_bdev_ftl.so 00:04:37.194 SO libspdk_bdev_zone_block.so.6.0 00:04:37.194 SYMLINK libspdk_bdev_iscsi.so 00:04:37.194 SYMLINK libspdk_bdev_delay.so 00:04:37.194 SYMLINK libspdk_bdev_aio.so 00:04:37.194 SYMLINK libspdk_bdev_zone_block.so 00:04:37.194 LIB libspdk_bdev_lvol.a 00:04:37.194 SO libspdk_bdev_lvol.so.6.0 00:04:37.194 LIB libspdk_bdev_virtio.a 00:04:37.194 SO libspdk_bdev_virtio.so.6.0 00:04:37.194 SYMLINK libspdk_bdev_lvol.so 00:04:37.452 SYMLINK libspdk_bdev_virtio.so 00:04:37.713 LIB libspdk_bdev_raid.a 00:04:37.713 SO libspdk_bdev_raid.so.6.0 00:04:37.973 SYMLINK libspdk_bdev_raid.so 00:04:38.914 LIB libspdk_bdev_nvme.a 00:04:38.914 SO libspdk_bdev_nvme.so.7.0 00:04:39.172 SYMLINK libspdk_bdev_nvme.so 00:04:39.432 CC module/event/subsystems/scheduler/scheduler.o 00:04:39.432 CC module/event/subsystems/sock/sock.o 00:04:39.432 CC module/event/subsystems/keyring/keyring.o 00:04:39.432 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:39.432 CC module/event/subsystems/vmd/vmd.o 00:04:39.432 CC module/event/subsystems/fsdev/fsdev.o 00:04:39.432 CC module/event/subsystems/iobuf/iobuf.o 00:04:39.432 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:39.432 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:39.432 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:39.690 LIB libspdk_event_keyring.a 00:04:39.690 LIB libspdk_event_vhost_blk.a 00:04:39.690 LIB libspdk_event_fsdev.a 00:04:39.690 LIB libspdk_event_vfu_tgt.a 00:04:39.690 LIB libspdk_event_scheduler.a 00:04:39.690 LIB libspdk_event_vmd.a 00:04:39.690 LIB libspdk_event_sock.a 00:04:39.690 SO libspdk_event_keyring.so.1.0 00:04:39.690 SO libspdk_event_vhost_blk.so.3.0 00:04:39.690 LIB libspdk_event_iobuf.a 00:04:39.690 SO libspdk_event_fsdev.so.1.0 00:04:39.690 SO libspdk_event_vfu_tgt.so.3.0 00:04:39.690 SO libspdk_event_scheduler.so.4.0 00:04:39.690 SO libspdk_event_sock.so.5.0 00:04:39.690 SO 
libspdk_event_vmd.so.6.0 00:04:39.690 SO libspdk_event_iobuf.so.3.0 00:04:39.690 SYMLINK libspdk_event_vhost_blk.so 00:04:39.690 SYMLINK libspdk_event_keyring.so 00:04:39.690 SYMLINK libspdk_event_fsdev.so 00:04:39.690 SYMLINK libspdk_event_vfu_tgt.so 00:04:39.690 SYMLINK libspdk_event_scheduler.so 00:04:39.691 SYMLINK libspdk_event_sock.so 00:04:39.691 SYMLINK libspdk_event_vmd.so 00:04:39.691 SYMLINK libspdk_event_iobuf.so 00:04:39.950 CC module/event/subsystems/accel/accel.o 00:04:40.209 LIB libspdk_event_accel.a 00:04:40.209 SO libspdk_event_accel.so.6.0 00:04:40.209 SYMLINK libspdk_event_accel.so 00:04:40.468 CC module/event/subsystems/bdev/bdev.o 00:04:40.468 LIB libspdk_event_bdev.a 00:04:40.468 SO libspdk_event_bdev.so.6.0 00:04:40.468 SYMLINK libspdk_event_bdev.so 00:04:40.727 CC module/event/subsystems/scsi/scsi.o 00:04:40.727 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:40.727 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:40.727 CC module/event/subsystems/nbd/nbd.o 00:04:40.727 CC module/event/subsystems/ublk/ublk.o 00:04:40.989 LIB libspdk_event_ublk.a 00:04:40.989 LIB libspdk_event_nbd.a 00:04:40.989 LIB libspdk_event_scsi.a 00:04:40.989 SO libspdk_event_nbd.so.6.0 00:04:40.989 SO libspdk_event_ublk.so.3.0 00:04:40.989 SO libspdk_event_scsi.so.6.0 00:04:40.989 SYMLINK libspdk_event_nbd.so 00:04:40.989 SYMLINK libspdk_event_ublk.so 00:04:40.989 SYMLINK libspdk_event_scsi.so 00:04:40.989 LIB libspdk_event_nvmf.a 00:04:40.989 SO libspdk_event_nvmf.so.6.0 00:04:40.989 SYMLINK libspdk_event_nvmf.so 00:04:41.247 CC module/event/subsystems/iscsi/iscsi.o 00:04:41.247 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:41.247 LIB libspdk_event_vhost_scsi.a 00:04:41.247 LIB libspdk_event_iscsi.a 00:04:41.247 SO libspdk_event_vhost_scsi.so.3.0 00:04:41.247 SO libspdk_event_iscsi.so.6.0 00:04:41.505 SYMLINK libspdk_event_vhost_scsi.so 00:04:41.505 SYMLINK libspdk_event_iscsi.so 00:04:41.505 SO libspdk.so.6.0 00:04:41.505 SYMLINK libspdk.so 00:04:41.769 
CXX app/trace/trace.o 00:04:41.769 CC test/rpc_client/rpc_client_test.o 00:04:41.769 CC app/trace_record/trace_record.o 00:04:41.769 CC app/spdk_nvme_discover/discovery_aer.o 00:04:41.769 CC app/spdk_lspci/spdk_lspci.o 00:04:41.769 CC app/spdk_nvme_identify/identify.o 00:04:41.769 CC app/spdk_nvme_perf/perf.o 00:04:41.769 CC app/spdk_top/spdk_top.o 00:04:41.769 TEST_HEADER include/spdk/accel_module.h 00:04:41.769 TEST_HEADER include/spdk/accel.h 00:04:41.769 TEST_HEADER include/spdk/assert.h 00:04:41.769 TEST_HEADER include/spdk/barrier.h 00:04:41.769 TEST_HEADER include/spdk/base64.h 00:04:41.769 TEST_HEADER include/spdk/bdev.h 00:04:41.769 TEST_HEADER include/spdk/bdev_module.h 00:04:41.769 TEST_HEADER include/spdk/bdev_zone.h 00:04:41.769 TEST_HEADER include/spdk/bit_array.h 00:04:41.769 TEST_HEADER include/spdk/bit_pool.h 00:04:41.769 TEST_HEADER include/spdk/blob_bdev.h 00:04:41.769 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:41.769 TEST_HEADER include/spdk/blobfs.h 00:04:41.769 TEST_HEADER include/spdk/blob.h 00:04:41.769 TEST_HEADER include/spdk/conf.h 00:04:41.769 TEST_HEADER include/spdk/config.h 00:04:41.769 TEST_HEADER include/spdk/cpuset.h 00:04:41.769 TEST_HEADER include/spdk/crc16.h 00:04:41.769 TEST_HEADER include/spdk/crc32.h 00:04:41.769 TEST_HEADER include/spdk/dif.h 00:04:41.769 TEST_HEADER include/spdk/crc64.h 00:04:41.769 TEST_HEADER include/spdk/dma.h 00:04:41.769 TEST_HEADER include/spdk/endian.h 00:04:41.769 TEST_HEADER include/spdk/env_dpdk.h 00:04:41.769 TEST_HEADER include/spdk/env.h 00:04:41.769 TEST_HEADER include/spdk/fd_group.h 00:04:41.769 TEST_HEADER include/spdk/event.h 00:04:41.769 TEST_HEADER include/spdk/fd.h 00:04:41.769 TEST_HEADER include/spdk/file.h 00:04:41.769 TEST_HEADER include/spdk/fsdev.h 00:04:41.769 TEST_HEADER include/spdk/fsdev_module.h 00:04:41.769 TEST_HEADER include/spdk/ftl.h 00:04:41.769 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:41.769 TEST_HEADER include/spdk/gpt_spec.h 00:04:41.769 TEST_HEADER 
include/spdk/hexlify.h 00:04:41.769 TEST_HEADER include/spdk/histogram_data.h 00:04:41.769 TEST_HEADER include/spdk/idxd.h 00:04:41.769 TEST_HEADER include/spdk/init.h 00:04:41.769 TEST_HEADER include/spdk/idxd_spec.h 00:04:41.769 TEST_HEADER include/spdk/ioat.h 00:04:41.769 TEST_HEADER include/spdk/ioat_spec.h 00:04:41.769 TEST_HEADER include/spdk/iscsi_spec.h 00:04:41.769 TEST_HEADER include/spdk/json.h 00:04:41.769 TEST_HEADER include/spdk/keyring.h 00:04:41.769 TEST_HEADER include/spdk/jsonrpc.h 00:04:41.769 TEST_HEADER include/spdk/keyring_module.h 00:04:41.769 TEST_HEADER include/spdk/likely.h 00:04:41.769 TEST_HEADER include/spdk/log.h 00:04:41.769 TEST_HEADER include/spdk/lvol.h 00:04:41.769 TEST_HEADER include/spdk/md5.h 00:04:41.769 TEST_HEADER include/spdk/memory.h 00:04:41.769 TEST_HEADER include/spdk/mmio.h 00:04:41.769 TEST_HEADER include/spdk/nbd.h 00:04:41.769 TEST_HEADER include/spdk/net.h 00:04:41.769 TEST_HEADER include/spdk/nvme.h 00:04:41.769 TEST_HEADER include/spdk/notify.h 00:04:41.769 TEST_HEADER include/spdk/nvme_intel.h 00:04:41.769 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:41.769 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:41.769 TEST_HEADER include/spdk/nvme_spec.h 00:04:41.769 TEST_HEADER include/spdk/nvme_zns.h 00:04:41.769 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:41.769 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:41.769 TEST_HEADER include/spdk/nvmf.h 00:04:41.769 TEST_HEADER include/spdk/nvmf_spec.h 00:04:41.769 TEST_HEADER include/spdk/nvmf_transport.h 00:04:41.769 TEST_HEADER include/spdk/opal.h 00:04:41.769 TEST_HEADER include/spdk/opal_spec.h 00:04:41.769 TEST_HEADER include/spdk/pci_ids.h 00:04:41.769 TEST_HEADER include/spdk/pipe.h 00:04:41.769 TEST_HEADER include/spdk/queue.h 00:04:41.769 TEST_HEADER include/spdk/reduce.h 00:04:41.769 TEST_HEADER include/spdk/rpc.h 00:04:41.769 TEST_HEADER include/spdk/scheduler.h 00:04:41.769 TEST_HEADER include/spdk/scsi.h 00:04:41.769 TEST_HEADER include/spdk/scsi_spec.h 
00:04:41.769 TEST_HEADER include/spdk/sock.h 00:04:41.769 TEST_HEADER include/spdk/string.h 00:04:41.769 TEST_HEADER include/spdk/stdinc.h 00:04:41.769 TEST_HEADER include/spdk/thread.h 00:04:41.769 TEST_HEADER include/spdk/trace.h 00:04:41.769 TEST_HEADER include/spdk/trace_parser.h 00:04:41.769 TEST_HEADER include/spdk/tree.h 00:04:41.769 TEST_HEADER include/spdk/ublk.h 00:04:41.769 TEST_HEADER include/spdk/util.h 00:04:41.769 TEST_HEADER include/spdk/uuid.h 00:04:41.770 CC app/spdk_dd/spdk_dd.o 00:04:41.770 TEST_HEADER include/spdk/version.h 00:04:41.770 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.770 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.770 TEST_HEADER include/spdk/vhost.h 00:04:41.770 TEST_HEADER include/spdk/vmd.h 00:04:41.770 TEST_HEADER include/spdk/xor.h 00:04:41.770 TEST_HEADER include/spdk/zipf.h 00:04:41.770 CXX test/cpp_headers/accel.o 00:04:41.770 CXX test/cpp_headers/accel_module.o 00:04:41.770 CXX test/cpp_headers/assert.o 00:04:41.770 CXX test/cpp_headers/barrier.o 00:04:41.770 CXX test/cpp_headers/base64.o 00:04:41.770 CXX test/cpp_headers/bdev.o 00:04:41.770 CXX test/cpp_headers/bdev_module.o 00:04:41.770 CXX test/cpp_headers/bdev_zone.o 00:04:41.770 CXX test/cpp_headers/bit_array.o 00:04:41.770 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:41.770 CXX test/cpp_headers/bit_pool.o 00:04:41.770 CXX test/cpp_headers/blob_bdev.o 00:04:41.770 CC app/nvmf_tgt/nvmf_main.o 00:04:41.770 CXX test/cpp_headers/blobfs_bdev.o 00:04:41.770 CXX test/cpp_headers/blobfs.o 00:04:41.770 CXX test/cpp_headers/blob.o 00:04:41.770 CC app/iscsi_tgt/iscsi_tgt.o 00:04:41.770 CXX test/cpp_headers/conf.o 00:04:41.770 CXX test/cpp_headers/config.o 00:04:41.770 CXX test/cpp_headers/cpuset.o 00:04:41.770 CXX test/cpp_headers/crc16.o 00:04:41.770 CC examples/ioat/perf/perf.o 00:04:41.770 CXX test/cpp_headers/crc32.o 00:04:41.770 CC examples/util/zipf/zipf.o 00:04:41.770 CC app/spdk_tgt/spdk_tgt.o 00:04:41.770 CC examples/ioat/verify/verify.o 00:04:41.770 CC 
test/thread/poller_perf/poller_perf.o 00:04:41.770 CC test/env/memory/memory_ut.o 00:04:41.770 CC test/app/jsoncat/jsoncat.o 00:04:41.770 CC test/env/vtophys/vtophys.o 00:04:41.770 CC test/app/histogram_perf/histogram_perf.o 00:04:41.770 CC app/fio/nvme/fio_plugin.o 00:04:41.770 CC test/env/pci/pci_ut.o 00:04:41.770 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:41.770 CC test/app/stub/stub.o 00:04:41.770 CC test/dma/test_dma/test_dma.o 00:04:42.031 CC test/app/bdev_svc/bdev_svc.o 00:04:42.031 CC app/fio/bdev/fio_plugin.o 00:04:42.031 LINK spdk_lspci 00:04:42.031 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.031 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:42.031 LINK rpc_client_test 00:04:42.294 LINK spdk_nvme_discover 00:04:42.294 LINK poller_perf 00:04:42.294 LINK zipf 00:04:42.294 LINK jsoncat 00:04:42.294 CXX test/cpp_headers/crc64.o 00:04:42.294 CXX test/cpp_headers/dif.o 00:04:42.294 LINK vtophys 00:04:42.294 LINK histogram_perf 00:04:42.294 CXX test/cpp_headers/dma.o 00:04:42.294 LINK env_dpdk_post_init 00:04:42.294 CXX test/cpp_headers/endian.o 00:04:42.294 LINK nvmf_tgt 00:04:42.294 CXX test/cpp_headers/env_dpdk.o 00:04:42.294 LINK interrupt_tgt 00:04:42.294 CXX test/cpp_headers/env.o 00:04:42.294 CXX test/cpp_headers/event.o 00:04:42.294 CXX test/cpp_headers/fd_group.o 00:04:42.294 CXX test/cpp_headers/fd.o 00:04:42.294 LINK spdk_trace_record 00:04:42.294 LINK stub 00:04:42.294 CXX test/cpp_headers/file.o 00:04:42.294 CXX test/cpp_headers/fsdev.o 00:04:42.294 LINK iscsi_tgt 00:04:42.294 CXX test/cpp_headers/fsdev_module.o 00:04:42.294 LINK ioat_perf 00:04:42.294 LINK verify 00:04:42.294 CXX test/cpp_headers/ftl.o 00:04:42.294 CXX test/cpp_headers/fuse_dispatcher.o 00:04:42.294 LINK bdev_svc 00:04:42.294 CXX test/cpp_headers/gpt_spec.o 00:04:42.294 CXX test/cpp_headers/hexlify.o 00:04:42.294 LINK spdk_tgt 00:04:42.294 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:42.294 CXX test/cpp_headers/histogram_data.o 00:04:42.294 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.562 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:42.562 CXX test/cpp_headers/idxd.o 00:04:42.562 CXX test/cpp_headers/idxd_spec.o 00:04:42.562 CXX test/cpp_headers/init.o 00:04:42.562 LINK spdk_dd 00:04:42.562 CXX test/cpp_headers/ioat.o 00:04:42.562 CXX test/cpp_headers/ioat_spec.o 00:04:42.562 CXX test/cpp_headers/iscsi_spec.o 00:04:42.562 CXX test/cpp_headers/json.o 00:04:42.562 CXX test/cpp_headers/jsonrpc.o 00:04:42.562 CXX test/cpp_headers/keyring.o 00:04:42.562 CXX test/cpp_headers/keyring_module.o 00:04:42.562 CXX test/cpp_headers/likely.o 00:04:42.562 CXX test/cpp_headers/log.o 00:04:42.562 LINK pci_ut 00:04:42.562 CXX test/cpp_headers/lvol.o 00:04:42.562 CXX test/cpp_headers/md5.o 00:04:42.562 CXX test/cpp_headers/memory.o 00:04:42.562 CXX test/cpp_headers/mmio.o 00:04:42.562 LINK spdk_trace 00:04:42.832 CXX test/cpp_headers/nbd.o 00:04:42.832 CXX test/cpp_headers/net.o 00:04:42.832 CXX test/cpp_headers/notify.o 00:04:42.832 CXX test/cpp_headers/nvme.o 00:04:42.832 CXX test/cpp_headers/nvme_intel.o 00:04:42.832 CXX test/cpp_headers/nvme_ocssd.o 00:04:42.832 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:42.832 CXX test/cpp_headers/nvme_spec.o 00:04:42.832 CXX test/cpp_headers/nvme_zns.o 00:04:42.832 CC test/event/event_perf/event_perf.o 00:04:42.832 CC test/event/reactor/reactor.o 00:04:42.832 CC test/event/reactor_perf/reactor_perf.o 00:04:42.832 CC examples/sock/hello_world/hello_sock.o 00:04:42.832 CXX test/cpp_headers/nvmf_cmd.o 00:04:42.832 CC examples/vmd/lsvmd/lsvmd.o 00:04:42.832 CC examples/idxd/perf/perf.o 00:04:42.832 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:42.832 CC test/event/app_repeat/app_repeat.o 00:04:42.832 CXX test/cpp_headers/nvmf.o 00:04:42.832 CC examples/vmd/led/led.o 00:04:42.832 CC examples/thread/thread/thread_ex.o 00:04:42.832 CC test/event/scheduler/scheduler.o 00:04:42.832 CXX test/cpp_headers/nvmf_spec.o 00:04:42.832 LINK test_dma 00:04:42.832 LINK nvme_fuzz 00:04:43.099 
CXX test/cpp_headers/nvmf_transport.o 00:04:43.099 CXX test/cpp_headers/opal.o 00:04:43.099 CXX test/cpp_headers/opal_spec.o 00:04:43.099 CXX test/cpp_headers/pci_ids.o 00:04:43.099 CXX test/cpp_headers/pipe.o 00:04:43.099 CXX test/cpp_headers/queue.o 00:04:43.099 CXX test/cpp_headers/reduce.o 00:04:43.099 CXX test/cpp_headers/rpc.o 00:04:43.099 CXX test/cpp_headers/scheduler.o 00:04:43.099 CXX test/cpp_headers/scsi.o 00:04:43.099 CXX test/cpp_headers/scsi_spec.o 00:04:43.099 CXX test/cpp_headers/sock.o 00:04:43.099 CXX test/cpp_headers/stdinc.o 00:04:43.099 CXX test/cpp_headers/string.o 00:04:43.099 CXX test/cpp_headers/thread.o 00:04:43.099 CXX test/cpp_headers/trace.o 00:04:43.099 LINK reactor 00:04:43.099 LINK event_perf 00:04:43.099 CXX test/cpp_headers/trace_parser.o 00:04:43.099 LINK reactor_perf 00:04:43.099 LINK spdk_bdev 00:04:43.099 LINK lsvmd 00:04:43.099 CXX test/cpp_headers/tree.o 00:04:43.099 CXX test/cpp_headers/ublk.o 00:04:43.099 CXX test/cpp_headers/util.o 00:04:43.099 LINK led 00:04:43.099 CXX test/cpp_headers/uuid.o 00:04:43.359 LINK app_repeat 00:04:43.359 CXX test/cpp_headers/version.o 00:04:43.359 CXX test/cpp_headers/vfio_user_pci.o 00:04:43.359 LINK spdk_nvme_perf 00:04:43.359 CXX test/cpp_headers/vfio_user_spec.o 00:04:43.359 CXX test/cpp_headers/vhost.o 00:04:43.359 LINK spdk_nvme 00:04:43.359 LINK mem_callbacks 00:04:43.359 CXX test/cpp_headers/vmd.o 00:04:43.359 CC app/vhost/vhost.o 00:04:43.359 LINK vhost_fuzz 00:04:43.359 CXX test/cpp_headers/xor.o 00:04:43.359 CXX test/cpp_headers/zipf.o 00:04:43.359 LINK hello_sock 00:04:43.359 LINK spdk_nvme_identify 00:04:43.359 LINK scheduler 00:04:43.359 LINK thread 00:04:43.359 LINK spdk_top 00:04:43.619 LINK idxd_perf 00:04:43.619 CC test/nvme/sgl/sgl.o 00:04:43.619 CC test/nvme/startup/startup.o 00:04:43.619 CC test/nvme/e2edp/nvme_dp.o 00:04:43.619 CC test/nvme/fdp/fdp.o 00:04:43.619 CC test/nvme/reserve/reserve.o 00:04:43.619 CC test/nvme/boot_partition/boot_partition.o 00:04:43.619 CC 
test/nvme/overhead/overhead.o 00:04:43.619 CC test/nvme/reset/reset.o 00:04:43.619 CC test/nvme/fused_ordering/fused_ordering.o 00:04:43.619 CC test/nvme/err_injection/err_injection.o 00:04:43.619 CC test/nvme/simple_copy/simple_copy.o 00:04:43.619 CC test/nvme/compliance/nvme_compliance.o 00:04:43.619 CC test/nvme/cuse/cuse.o 00:04:43.619 CC test/nvme/aer/aer.o 00:04:43.619 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:43.619 CC test/nvme/connect_stress/connect_stress.o 00:04:43.619 CC test/accel/dif/dif.o 00:04:43.619 CC test/blobfs/mkfs/mkfs.o 00:04:43.619 LINK vhost 00:04:43.619 CC test/lvol/esnap/esnap.o 00:04:43.878 CC examples/nvme/hello_world/hello_world.o 00:04:43.878 CC examples/nvme/hotplug/hotplug.o 00:04:43.878 CC examples/nvme/arbitration/arbitration.o 00:04:43.879 CC examples/nvme/reconnect/reconnect.o 00:04:43.879 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.879 CC examples/nvme/abort/abort.o 00:04:43.879 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:43.879 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:43.879 LINK boot_partition 00:04:43.879 LINK fused_ordering 00:04:43.879 LINK doorbell_aers 00:04:43.879 LINK reserve 00:04:43.879 LINK connect_stress 00:04:43.879 LINK mkfs 00:04:43.879 LINK startup 00:04:43.879 LINK sgl 00:04:44.139 LINK simple_copy 00:04:44.139 LINK reset 00:04:44.139 LINK aer 00:04:44.139 LINK err_injection 00:04:44.139 CC examples/accel/perf/accel_perf.o 00:04:44.139 LINK memory_ut 00:04:44.139 LINK overhead 00:04:44.139 LINK nvme_compliance 00:04:44.139 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:44.139 LINK nvme_dp 00:04:44.139 CC examples/blob/cli/blobcli.o 00:04:44.139 CC examples/blob/hello_world/hello_blob.o 00:04:44.139 LINK hello_world 00:04:44.139 LINK hotplug 00:04:44.139 LINK cmb_copy 00:04:44.139 LINK pmr_persistence 00:04:44.139 LINK fdp 00:04:44.139 LINK arbitration 00:04:44.139 LINK reconnect 00:04:44.399 LINK abort 00:04:44.399 LINK dif 00:04:44.399 LINK hello_blob 00:04:44.399 LINK 
hello_fsdev 00:04:44.399 LINK nvme_manage 00:04:44.659 LINK accel_perf 00:04:44.659 LINK blobcli 00:04:44.918 CC test/bdev/bdevio/bdevio.o 00:04:44.918 CC examples/bdev/hello_world/hello_bdev.o 00:04:44.918 CC examples/bdev/bdevperf/bdevperf.o 00:04:44.918 LINK iscsi_fuzz 00:04:45.177 LINK hello_bdev 00:04:45.177 LINK cuse 00:04:45.177 LINK bdevio 00:04:45.751 LINK bdevperf 00:04:46.010 CC examples/nvmf/nvmf/nvmf.o 00:04:46.576 LINK nvmf 00:04:49.111 LINK esnap 00:04:49.370 00:04:49.370 real 1m7.690s 00:04:49.370 user 9m4.223s 00:04:49.370 sys 1m58.479s 00:04:49.370 13:15:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:49.370 13:15:40 make -- common/autotest_common.sh@10 -- $ set +x 00:04:49.370 ************************************ 00:04:49.370 END TEST make 00:04:49.370 ************************************ 00:04:49.370 13:15:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:49.370 13:15:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:49.370 13:15:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:49.370 13:15:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.370 13:15:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:49.370 13:15:41 -- pm/common@44 -- $ pid=5980 00:04:49.370 13:15:41 -- pm/common@50 -- $ kill -TERM 5980 00:04:49.370 13:15:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.370 13:15:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:49.370 13:15:41 -- pm/common@44 -- $ pid=5982 00:04:49.370 13:15:41 -- pm/common@50 -- $ kill -TERM 5982 00:04:49.370 13:15:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.370 13:15:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:49.370 13:15:41 -- pm/common@44 -- $ 
pid=5984 00:04:49.370 13:15:41 -- pm/common@50 -- $ kill -TERM 5984 00:04:49.370 13:15:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.370 13:15:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:49.370 13:15:41 -- pm/common@44 -- $ pid=6012 00:04:49.370 13:15:41 -- pm/common@50 -- $ sudo -E kill -TERM 6012 00:04:49.370 13:15:41 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.370 13:15:41 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.370 13:15:41 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.370 13:15:41 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.370 13:15:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.370 13:15:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.370 13:15:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.370 13:15:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.370 13:15:41 -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.370 13:15:41 -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.370 13:15:41 -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.370 13:15:41 -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.370 13:15:41 -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.370 13:15:41 -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.370 13:15:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.370 13:15:41 -- scripts/common.sh@344 -- # case "$op" in 00:04:49.370 13:15:41 -- scripts/common.sh@345 -- # : 1 00:04:49.370 13:15:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.371 13:15:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.371 13:15:41 -- scripts/common.sh@365 -- # decimal 1 00:04:49.371 13:15:41 -- scripts/common.sh@353 -- # local d=1 00:04:49.371 13:15:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.371 13:15:41 -- scripts/common.sh@355 -- # echo 1 00:04:49.371 13:15:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.371 13:15:41 -- scripts/common.sh@366 -- # decimal 2 00:04:49.371 13:15:41 -- scripts/common.sh@353 -- # local d=2 00:04:49.371 13:15:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.371 13:15:41 -- scripts/common.sh@355 -- # echo 2 00:04:49.371 13:15:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.371 13:15:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.371 13:15:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.371 13:15:41 -- scripts/common.sh@368 -- # return 0 00:04:49.371 13:15:41 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.371 13:15:41 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 13:15:41 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 13:15:41 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc 
genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 13:15:41 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 13:15:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:49.371 13:15:41 -- nvmf/common.sh@7 -- # uname -s 00:04:49.371 13:15:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.371 13:15:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.371 13:15:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.371 13:15:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.371 13:15:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.371 13:15:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.371 13:15:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.371 13:15:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.371 13:15:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.371 13:15:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.630 13:15:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:49.630 13:15:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:49.630 13:15:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.631 13:15:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.631 13:15:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:49.631 13:15:41 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.631 13:15:41 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:49.631 13:15:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.631 13:15:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.631 13:15:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.631 13:15:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.631 13:15:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.631 13:15:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.631 13:15:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.631 13:15:41 -- paths/export.sh@5 -- # export PATH 00:04:49.631 13:15:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.631 13:15:41 -- nvmf/common.sh@51 -- # : 0 00:04:49.631 13:15:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.631 13:15:41 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:49.631 13:15:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.631 13:15:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.631 13:15:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.631 13:15:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.631 13:15:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.631 13:15:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.631 13:15:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.631 13:15:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:49.631 13:15:41 -- spdk/autotest.sh@32 -- # uname -s 00:04:49.631 13:15:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:49.631 13:15:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:49.631 13:15:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:49.631 13:15:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:49.631 13:15:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:49.631 13:15:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:49.631 13:15:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:49.631 13:15:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:49.631 13:15:41 -- spdk/autotest.sh@48 -- # udevadm_pid=87500 00:04:49.631 13:15:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:49.631 13:15:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:49.631 13:15:41 -- pm/common@17 -- # local monitor 00:04:49.631 13:15:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.631 13:15:41 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:49.631 13:15:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.631 13:15:41 -- pm/common@21 -- # date +%s 00:04:49.631 13:15:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.631 13:15:41 -- pm/common@21 -- # date +%s 00:04:49.631 13:15:41 -- pm/common@25 -- # sleep 1 00:04:49.631 13:15:41 -- pm/common@21 -- # date +%s 00:04:49.631 13:15:41 -- pm/common@21 -- # date +%s 00:04:49.631 13:15:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728904541 00:04:49.631 13:15:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728904541 00:04:49.631 13:15:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728904541 00:04:49.631 13:15:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728904541 00:04:49.631 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728904541_collect-vmstat.pm.log 00:04:49.631 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728904541_collect-cpu-load.pm.log 00:04:49.631 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728904541_collect-cpu-temp.pm.log 00:04:49.631 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728904541_collect-bmc-pm.bmc.pm.log 00:04:50.569 
13:15:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:50.569 13:15:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:50.569 13:15:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.569 13:15:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.569 13:15:42 -- spdk/autotest.sh@59 -- # create_test_list 00:04:50.569 13:15:42 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:50.569 13:15:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.569 13:15:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:50.569 13:15:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.569 13:15:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.569 13:15:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:50.569 13:15:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.569 13:15:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:50.569 13:15:42 -- common/autotest_common.sh@1455 -- # uname 00:04:50.569 13:15:42 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:50.569 13:15:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:50.569 13:15:42 -- common/autotest_common.sh@1475 -- # uname 00:04:50.569 13:15:42 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:50.569 13:15:42 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:50.569 13:15:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:50.569 lcov: LCOV version 1.15 00:04:50.569 13:15:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:08.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:08.654 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.584 13:16:19 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.584 13:16:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.584 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:05:30.584 13:16:19 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.584 13:16:19 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.584 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:30.584 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:30.584 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:30.584 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:30.584 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:30.584 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:30.584 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:30.584 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:30.584 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:30.584 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:30.584 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:30.584 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:30.584 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:30.584 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:30.584 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:30.584 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:30.584 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:30.584 13:16:20 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:30.584 13:16:20 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:30.584 13:16:20 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:30.584 13:16:20 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:30.584 13:16:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.584 13:16:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:30.584 13:16:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:30.584 13:16:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.584 13:16:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.584 13:16:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:30.584 13:16:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.584 13:16:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.584 13:16:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:30.584 13:16:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:30.584 13:16:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.584 No valid GPT data, bailing 00:05:30.584 13:16:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.584 13:16:20 -- scripts/common.sh@394 -- # pt= 00:05:30.584 13:16:20 -- scripts/common.sh@395 -- # return 1 00:05:30.584 13:16:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:30.584 1+0 records in 00:05:30.584 1+0 records out 00:05:30.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00203967 s, 514 MB/s 00:05:30.584 13:16:20 -- spdk/autotest.sh@105 -- # sync 00:05:30.584 13:16:20 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:30.584 13:16:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:30.584 13:16:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:30.843 13:16:22 -- spdk/autotest.sh@111 -- # uname -s 00:05:30.843 13:16:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:30.843 13:16:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:30.843 13:16:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:32.223 Hugepages 00:05:32.224 node hugesize free / total 00:05:32.224 node0 1048576kB 0 / 0 00:05:32.224 node0 2048kB 0 / 0 00:05:32.224 node1 1048576kB 0 / 0 00:05:32.224 node1 2048kB 0 / 0 00:05:32.224 00:05:32.224 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.224 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:32.224 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:32.224 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:32.224 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:32.224 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:32.224 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:32.224 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:32.224 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:32.224 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:32.224 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:32.224 13:16:23 -- spdk/autotest.sh@117 -- # uname -s 00:05:32.224 13:16:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:32.224 13:16:23 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:32.224 13:16:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:33.605 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:33.605 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:33.605 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:33.605 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:33.605 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:33.605 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:33.605 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:33.605 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:33.605 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:34.549 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:34.549 13:16:26 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:35.492 13:16:27 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:35.492 13:16:27 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:35.492 13:16:27 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.492 13:16:27 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:35.492 13:16:27 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:35.492 13:16:27 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:35.492 13:16:27 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.492 13:16:27 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:35.492 13:16:27 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:05:35.750 13:16:27 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:35.750 13:16:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:35.750 13:16:27 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:36.687 Waiting for block devices as requested 00:05:36.946 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:36.946 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:36.946 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:37.206 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:37.206 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:37.206 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:37.206 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:37.467 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:37.467 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:37.467 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:37.727 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:37.727 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:37.727 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:37.727 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:37.988 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:37.988 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:37.988 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:38.249 13:16:29 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:38.249 13:16:29 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:05:38.249 13:16:29 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:38.249 13:16:29 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:38.249 13:16:29 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:38.249 13:16:29 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:38.249 13:16:29 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:38.249 13:16:29 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:38.249 13:16:29 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:38.249 13:16:29 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:38.249 13:16:29 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:38.249 13:16:29 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:38.249 13:16:29 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:38.249 13:16:29 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:38.249 13:16:29 -- common/autotest_common.sh@1541 -- # continue 00:05:38.249 13:16:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:38.249 13:16:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.249 13:16:29 -- common/autotest_common.sh@10 -- # set +x 00:05:38.249 13:16:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:38.249 13:16:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:38.249 13:16:29 -- common/autotest_common.sh@10 -- # set +x 00:05:38.249 13:16:29 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:39.637 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:39.637 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:39.637 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:39.637 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:39.637 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:39.637 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:39.637 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:39.637 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:39.637 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:40.579 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:40.579 13:16:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:40.579 13:16:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.579 13:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.579 13:16:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:40.579 13:16:32 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:40.579 13:16:32 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:40.579 13:16:32 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:40.579 13:16:32 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:40.579 13:16:32 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:40.579 13:16:32 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:40.579 13:16:32 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:40.579 13:16:32 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:40.579 13:16:32 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:40.579 13:16:32 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:40.579 13:16:32 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.579 13:16:32 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:40.840 13:16:32 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:40.840 13:16:32 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:40.840 13:16:32 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:40.840 13:16:32 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:40.840 13:16:32 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:40.840 13:16:32 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:40.840 13:16:32 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:40.840 13:16:32 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:40.840 13:16:32 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:40.840 13:16:32 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:40.840 13:16:32 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=98114 00:05:40.840 13:16:32 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.840 13:16:32 -- common/autotest_common.sh@1583 -- # waitforlisten 98114 00:05:40.840 13:16:32 -- common/autotest_common.sh@831 -- # '[' -z 98114 ']' 00:05:40.840 13:16:32 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.840 13:16:32 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.840 13:16:32 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.840 13:16:32 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.840 13:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.840 [2024-10-14 13:16:32.519019] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:05:40.840 [2024-10-14 13:16:32.519124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98114 ] 00:05:40.840 [2024-10-14 13:16:32.579994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.840 [2024-10-14 13:16:32.625874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.101 13:16:32 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.101 13:16:32 -- common/autotest_common.sh@864 -- # return 0 00:05:41.101 13:16:32 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:41.101 13:16:32 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:41.101 13:16:32 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:44.404 nvme0n1 00:05:44.404 13:16:35 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:44.404 [2024-10-14 13:16:36.235471] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:44.404 [2024-10-14 13:16:36.235535] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:44.404 request: 00:05:44.404 { 00:05:44.404 "nvme_ctrlr_name": "nvme0", 00:05:44.404 "password": "test", 00:05:44.404 "method": "bdev_nvme_opal_revert", 00:05:44.404 "req_id": 1 00:05:44.404 } 00:05:44.404 Got JSON-RPC error response 00:05:44.404 response: 00:05:44.404 { 00:05:44.404 
"code": -32603, 00:05:44.404 "message": "Internal error" 00:05:44.404 } 00:05:44.404 13:16:36 -- common/autotest_common.sh@1589 -- # true 00:05:44.404 13:16:36 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:44.404 13:16:36 -- common/autotest_common.sh@1593 -- # killprocess 98114 00:05:44.404 13:16:36 -- common/autotest_common.sh@950 -- # '[' -z 98114 ']' 00:05:44.404 13:16:36 -- common/autotest_common.sh@954 -- # kill -0 98114 00:05:44.404 13:16:36 -- common/autotest_common.sh@955 -- # uname 00:05:44.404 13:16:36 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.404 13:16:36 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98114 00:05:44.664 13:16:36 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.664 13:16:36 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.664 13:16:36 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98114' 00:05:44.664 killing process with pid 98114 00:05:44.664 13:16:36 -- common/autotest_common.sh@969 -- # kill 98114 00:05:44.664 13:16:36 -- common/autotest_common.sh@974 -- # wait 98114 00:05:46.574 13:16:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:46.574 13:16:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:46.574 13:16:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.574 13:16:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.574 13:16:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:46.574 13:16:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.574 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:46.574 13:16:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:46.574 13:16:38 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:46.574 13:16:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.574 13:16:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.574 13:16:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.574 ************************************ 00:05:46.574 START TEST env 00:05:46.574 ************************************ 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:46.574 * Looking for test storage... 00:05:46.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.574 13:16:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.574 13:16:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.574 13:16:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.574 13:16:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.574 13:16:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.574 13:16:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.574 13:16:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.574 13:16:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.574 13:16:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.574 13:16:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.574 13:16:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.574 13:16:38 env -- scripts/common.sh@344 -- # case "$op" in 00:05:46.574 13:16:38 env -- scripts/common.sh@345 -- # : 1 00:05:46.574 13:16:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.574 13:16:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.574 13:16:38 env -- scripts/common.sh@365 -- # decimal 1 00:05:46.574 13:16:38 env -- scripts/common.sh@353 -- # local d=1 00:05:46.574 13:16:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.574 13:16:38 env -- scripts/common.sh@355 -- # echo 1 00:05:46.574 13:16:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.574 13:16:38 env -- scripts/common.sh@366 -- # decimal 2 00:05:46.574 13:16:38 env -- scripts/common.sh@353 -- # local d=2 00:05:46.574 13:16:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.574 13:16:38 env -- scripts/common.sh@355 -- # echo 2 00:05:46.574 13:16:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.574 13:16:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.574 13:16:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.574 13:16:38 env -- scripts/common.sh@368 -- # return 0 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.574 --rc genhtml_branch_coverage=1 00:05:46.574 --rc genhtml_function_coverage=1 00:05:46.574 --rc genhtml_legend=1 00:05:46.574 --rc geninfo_all_blocks=1 00:05:46.574 --rc geninfo_unexecuted_blocks=1 00:05:46.574 00:05:46.574 ' 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.574 --rc genhtml_branch_coverage=1 00:05:46.574 --rc genhtml_function_coverage=1 00:05:46.574 --rc genhtml_legend=1 00:05:46.574 --rc geninfo_all_blocks=1 00:05:46.574 --rc geninfo_unexecuted_blocks=1 00:05:46.574 00:05:46.574 ' 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:46.574 --rc genhtml_branch_coverage=1 00:05:46.574 --rc genhtml_function_coverage=1 00:05:46.574 --rc genhtml_legend=1 00:05:46.574 --rc geninfo_all_blocks=1 00:05:46.574 --rc geninfo_unexecuted_blocks=1 00:05:46.574 00:05:46.574 ' 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.574 --rc genhtml_branch_coverage=1 00:05:46.574 --rc genhtml_function_coverage=1 00:05:46.574 --rc genhtml_legend=1 00:05:46.574 --rc geninfo_all_blocks=1 00:05:46.574 --rc geninfo_unexecuted_blocks=1 00:05:46.574 00:05:46.574 ' 00:05:46.574 13:16:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.574 13:16:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.574 13:16:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.574 ************************************ 00:05:46.574 START TEST env_memory 00:05:46.574 ************************************ 00:05:46.574 13:16:38 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:46.574 00:05:46.574 00:05:46.574 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.574 http://cunit.sourceforge.net/ 00:05:46.574 00:05:46.574 00:05:46.574 Suite: memory 00:05:46.574 Test: alloc and free memory map ...[2024-10-14 13:16:38.263619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:46.574 passed 00:05:46.574 Test: mem map translation ...[2024-10-14 13:16:38.283322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:46.574 [2024-10-14 
13:16:38.283344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:46.574 [2024-10-14 13:16:38.283390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:46.574 [2024-10-14 13:16:38.283402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:46.574 passed 00:05:46.574 Test: mem map registration ...[2024-10-14 13:16:38.324095] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:46.574 [2024-10-14 13:16:38.324114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:46.574 passed 00:05:46.574 Test: mem map adjacent registrations ...passed 00:05:46.574 00:05:46.574 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.574 suites 1 1 n/a 0 0 00:05:46.574 tests 4 4 4 0 0 00:05:46.574 asserts 152 152 152 0 n/a 00:05:46.574 00:05:46.574 Elapsed time = 0.140 seconds 00:05:46.574 00:05:46.574 real 0m0.150s 00:05:46.574 user 0m0.141s 00:05:46.574 sys 0m0.008s 00:05:46.574 13:16:38 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.575 13:16:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:46.575 ************************************ 00:05:46.575 END TEST env_memory 00:05:46.575 ************************************ 00:05:46.575 13:16:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:46.575 13:16:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:05:46.575 13:16:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.575 13:16:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.575 ************************************ 00:05:46.575 START TEST env_vtophys 00:05:46.575 ************************************ 00:05:46.575 13:16:38 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:46.836 EAL: lib.eal log level changed from notice to debug 00:05:46.836 EAL: Detected lcore 0 as core 0 on socket 0 00:05:46.836 EAL: Detected lcore 1 as core 1 on socket 0 00:05:46.836 EAL: Detected lcore 2 as core 2 on socket 0 00:05:46.836 EAL: Detected lcore 3 as core 3 on socket 0 00:05:46.836 EAL: Detected lcore 4 as core 4 on socket 0 00:05:46.836 EAL: Detected lcore 5 as core 5 on socket 0 00:05:46.836 EAL: Detected lcore 6 as core 8 on socket 0 00:05:46.836 EAL: Detected lcore 7 as core 9 on socket 0 00:05:46.836 EAL: Detected lcore 8 as core 10 on socket 0 00:05:46.836 EAL: Detected lcore 9 as core 11 on socket 0 00:05:46.836 EAL: Detected lcore 10 as core 12 on socket 0 00:05:46.836 EAL: Detected lcore 11 as core 13 on socket 0 00:05:46.836 EAL: Detected lcore 12 as core 0 on socket 1 00:05:46.836 EAL: Detected lcore 13 as core 1 on socket 1 00:05:46.836 EAL: Detected lcore 14 as core 2 on socket 1 00:05:46.836 EAL: Detected lcore 15 as core 3 on socket 1 00:05:46.836 EAL: Detected lcore 16 as core 4 on socket 1 00:05:46.836 EAL: Detected lcore 17 as core 5 on socket 1 00:05:46.836 EAL: Detected lcore 18 as core 8 on socket 1 00:05:46.836 EAL: Detected lcore 19 as core 9 on socket 1 00:05:46.836 EAL: Detected lcore 20 as core 10 on socket 1 00:05:46.836 EAL: Detected lcore 21 as core 11 on socket 1 00:05:46.836 EAL: Detected lcore 22 as core 12 on socket 1 00:05:46.836 EAL: Detected lcore 23 as core 13 on socket 1 00:05:46.836 EAL: Detected lcore 24 as core 0 on socket 0 00:05:46.836 EAL: Detected lcore 25 as core 
1 on socket 0 00:05:46.836 EAL: Detected lcore 26 as core 2 on socket 0 00:05:46.836 EAL: Detected lcore 27 as core 3 on socket 0 00:05:46.836 EAL: Detected lcore 28 as core 4 on socket 0 00:05:46.836 EAL: Detected lcore 29 as core 5 on socket 0 00:05:46.836 EAL: Detected lcore 30 as core 8 on socket 0 00:05:46.836 EAL: Detected lcore 31 as core 9 on socket 0 00:05:46.836 EAL: Detected lcore 32 as core 10 on socket 0 00:05:46.836 EAL: Detected lcore 33 as core 11 on socket 0 00:05:46.836 EAL: Detected lcore 34 as core 12 on socket 0 00:05:46.836 EAL: Detected lcore 35 as core 13 on socket 0 00:05:46.836 EAL: Detected lcore 36 as core 0 on socket 1 00:05:46.836 EAL: Detected lcore 37 as core 1 on socket 1 00:05:46.836 EAL: Detected lcore 38 as core 2 on socket 1 00:05:46.836 EAL: Detected lcore 39 as core 3 on socket 1 00:05:46.836 EAL: Detected lcore 40 as core 4 on socket 1 00:05:46.836 EAL: Detected lcore 41 as core 5 on socket 1 00:05:46.836 EAL: Detected lcore 42 as core 8 on socket 1 00:05:46.836 EAL: Detected lcore 43 as core 9 on socket 1 00:05:46.836 EAL: Detected lcore 44 as core 10 on socket 1 00:05:46.836 EAL: Detected lcore 45 as core 11 on socket 1 00:05:46.836 EAL: Detected lcore 46 as core 12 on socket 1 00:05:46.836 EAL: Detected lcore 47 as core 13 on socket 1 00:05:46.836 EAL: Maximum logical cores by configuration: 128 00:05:46.836 EAL: Detected CPU lcores: 48 00:05:46.836 EAL: Detected NUMA nodes: 2 00:05:46.836 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:46.836 EAL: Detected shared linkage of DPDK 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:46.836 EAL: Registered [vdev] bus. 
00:05:46.836 EAL: bus.vdev log level changed from disabled to notice 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:46.836 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:46.836 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:46.836 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:46.836 EAL: No shared files mode enabled, IPC will be disabled 00:05:46.836 EAL: No shared files mode enabled, IPC is disabled 00:05:46.836 EAL: Bus pci wants IOVA as 'DC' 00:05:46.836 EAL: Bus vdev wants IOVA as 'DC' 00:05:46.836 EAL: Buses did not request a specific IOVA mode. 00:05:46.836 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:46.836 EAL: Selected IOVA mode 'VA' 00:05:46.836 EAL: Probing VFIO support... 00:05:46.836 EAL: IOMMU type 1 (Type 1) is supported 00:05:46.836 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:46.836 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:46.836 EAL: VFIO support initialized 00:05:46.836 EAL: Ask a virtual area of 0x2e000 bytes 00:05:46.836 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:46.836 EAL: Setting up physically contiguous memory... 
00:05:46.836 EAL: Setting maximum number of open files to 524288 00:05:46.836 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:46.836 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:46.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:46.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.836 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:46.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.836 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:46.836 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:46.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.836 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:46.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.836 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:46.836 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:46.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.836 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:46.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.836 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:46.836 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:46.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.836 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:46.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.836 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:46.836 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:46.836 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:46.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.836 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:46.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.836 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:46.836 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:46.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.836 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:46.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.836 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:46.836 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:46.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.836 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:46.836 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.836 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:46.836 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:46.837 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.837 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:46.837 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.837 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.837 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:46.837 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:46.837 EAL: Hugepages will be freed exactly as allocated. 
00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: TSC frequency is ~2700000 KHz 00:05:46.837 EAL: Main lcore 0 is ready (tid=7f6307d2ba00;cpuset=[0]) 00:05:46.837 EAL: Trying to obtain current memory policy. 00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 0 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 2MB 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:46.837 EAL: Mem event callback 'spdk:(nil)' registered 00:05:46.837 00:05:46.837 00:05:46.837 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.837 http://cunit.sourceforge.net/ 00:05:46.837 00:05:46.837 00:05:46.837 Suite: components_suite 00:05:46.837 Test: vtophys_malloc_test ...passed 00:05:46.837 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 4 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 4MB 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was shrunk by 4MB 00:05:46.837 EAL: Trying to obtain current memory policy. 
00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 4 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 6MB 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was shrunk by 6MB 00:05:46.837 EAL: Trying to obtain current memory policy. 00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 4 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 10MB 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was shrunk by 10MB 00:05:46.837 EAL: Trying to obtain current memory policy. 00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 4 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 18MB 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was shrunk by 18MB 00:05:46.837 EAL: Trying to obtain current memory policy. 
00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 4 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 34MB 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was shrunk by 34MB 00:05:46.837 EAL: Trying to obtain current memory policy. 00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 4 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 66MB 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was shrunk by 66MB 00:05:46.837 EAL: Trying to obtain current memory policy. 00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.837 EAL: Restoring previous memory policy: 4 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was expanded by 130MB 00:05:46.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.837 EAL: request: mp_malloc_sync 00:05:46.837 EAL: No shared files mode enabled, IPC is disabled 00:05:46.837 EAL: Heap on socket 0 was shrunk by 130MB 00:05:46.837 EAL: Trying to obtain current memory policy. 
00:05:46.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.098 EAL: Restoring previous memory policy: 4 00:05:47.098 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.098 EAL: request: mp_malloc_sync 00:05:47.098 EAL: No shared files mode enabled, IPC is disabled 00:05:47.098 EAL: Heap on socket 0 was expanded by 258MB 00:05:47.098 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.098 EAL: request: mp_malloc_sync 00:05:47.098 EAL: No shared files mode enabled, IPC is disabled 00:05:47.098 EAL: Heap on socket 0 was shrunk by 258MB 00:05:47.098 EAL: Trying to obtain current memory policy. 00:05:47.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.358 EAL: Restoring previous memory policy: 4 00:05:47.358 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.358 EAL: request: mp_malloc_sync 00:05:47.358 EAL: No shared files mode enabled, IPC is disabled 00:05:47.358 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.358 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.358 EAL: request: mp_malloc_sync 00:05:47.358 EAL: No shared files mode enabled, IPC is disabled 00:05:47.358 EAL: Heap on socket 0 was shrunk by 514MB 00:05:47.358 EAL: Trying to obtain current memory policy. 
00:05:47.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.618 EAL: Restoring previous memory policy: 4 00:05:47.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.618 EAL: request: mp_malloc_sync 00:05:47.618 EAL: No shared files mode enabled, IPC is disabled 00:05:47.618 EAL: Heap on socket 0 was expanded by 1026MB 00:05:47.879 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.139 EAL: request: mp_malloc_sync 00:05:48.139 EAL: No shared files mode enabled, IPC is disabled 00:05:48.139 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:48.139 passed 00:05:48.139 00:05:48.139 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.139 suites 1 1 n/a 0 0 00:05:48.139 tests 2 2 2 0 0 00:05:48.139 asserts 497 497 497 0 n/a 00:05:48.139 00:05:48.139 Elapsed time = 1.321 seconds 00:05:48.139 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.139 EAL: request: mp_malloc_sync 00:05:48.139 EAL: No shared files mode enabled, IPC is disabled 00:05:48.139 EAL: Heap on socket 0 was shrunk by 2MB 00:05:48.139 EAL: No shared files mode enabled, IPC is disabled 00:05:48.139 EAL: No shared files mode enabled, IPC is disabled 00:05:48.139 EAL: No shared files mode enabled, IPC is disabled 00:05:48.139 00:05:48.139 real 0m1.442s 00:05:48.139 user 0m0.837s 00:05:48.139 sys 0m0.564s 00:05:48.139 13:16:39 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.139 13:16:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:48.139 ************************************ 00:05:48.139 END TEST env_vtophys 00:05:48.139 ************************************ 00:05:48.139 13:16:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:48.139 13:16:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.139 13:16:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.139 13:16:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.139 
************************************ 00:05:48.139 START TEST env_pci 00:05:48.139 ************************************ 00:05:48.139 13:16:39 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:48.139 00:05:48.139 00:05:48.139 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.139 http://cunit.sourceforge.net/ 00:05:48.139 00:05:48.139 00:05:48.139 Suite: pci 00:05:48.139 Test: pci_hook ...[2024-10-14 13:16:39.930311] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 99011 has claimed it 00:05:48.139 EAL: Cannot find device (10000:00:01.0) 00:05:48.139 EAL: Failed to attach device on primary process 00:05:48.139 passed 00:05:48.139 00:05:48.139 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.139 suites 1 1 n/a 0 0 00:05:48.139 tests 1 1 1 0 0 00:05:48.139 asserts 25 25 25 0 n/a 00:05:48.139 00:05:48.139 Elapsed time = 0.022 seconds 00:05:48.139 00:05:48.139 real 0m0.036s 00:05:48.139 user 0m0.012s 00:05:48.139 sys 0m0.023s 00:05:48.139 13:16:39 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.139 13:16:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:48.139 ************************************ 00:05:48.140 END TEST env_pci 00:05:48.140 ************************************ 00:05:48.140 13:16:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:48.140 13:16:39 env -- env/env.sh@15 -- # uname 00:05:48.140 13:16:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:48.140 13:16:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:48.140 13:16:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.140 13:16:39 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:48.140 13:16:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.140 13:16:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.400 ************************************ 00:05:48.400 START TEST env_dpdk_post_init 00:05:48.400 ************************************ 00:05:48.400 13:16:40 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.400 EAL: Detected CPU lcores: 48 00:05:48.400 EAL: Detected NUMA nodes: 2 00:05:48.400 EAL: Detected shared linkage of DPDK 00:05:48.400 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.400 EAL: Selected IOVA mode 'VA' 00:05:48.400 EAL: VFIO support initialized 00:05:48.400 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.400 EAL: Using IOMMU type 1 (Type 1) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:48.400 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:48.661 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:49.233 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:52.539 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:52.539 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:52.798 Starting DPDK initialization... 00:05:52.798 Starting SPDK post initialization... 00:05:52.798 SPDK NVMe probe 00:05:52.798 Attaching to 0000:88:00.0 00:05:52.798 Attached to 0000:88:00.0 00:05:52.798 Cleaning up... 00:05:52.798 00:05:52.798 real 0m4.428s 00:05:52.798 user 0m3.291s 00:05:52.798 sys 0m0.196s 00:05:52.798 13:16:44 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.798 13:16:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.798 ************************************ 00:05:52.798 END TEST env_dpdk_post_init 00:05:52.798 ************************************ 00:05:52.798 13:16:44 env -- env/env.sh@26 -- # uname 00:05:52.798 13:16:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.798 13:16:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.798 13:16:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.798 13:16:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.798 13:16:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.798 ************************************ 00:05:52.798 START TEST env_mem_callbacks 00:05:52.798 ************************************ 00:05:52.798 13:16:44 
env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.798 EAL: Detected CPU lcores: 48 00:05:52.798 EAL: Detected NUMA nodes: 2 00:05:52.798 EAL: Detected shared linkage of DPDK 00:05:52.798 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.798 EAL: Selected IOVA mode 'VA' 00:05:52.798 EAL: VFIO support initialized 00:05:52.798 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.798 00:05:52.798 00:05:52.798 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.798 http://cunit.sourceforge.net/ 00:05:52.798 00:05:52.798 00:05:52.798 Suite: memory 00:05:52.798 Test: test ... 00:05:52.798 register 0x200000200000 2097152 00:05:52.798 malloc 3145728 00:05:52.798 register 0x200000400000 4194304 00:05:52.798 buf 0x200000500000 len 3145728 PASSED 00:05:52.798 malloc 64 00:05:52.798 buf 0x2000004fff40 len 64 PASSED 00:05:52.798 malloc 4194304 00:05:52.798 register 0x200000800000 6291456 00:05:52.798 buf 0x200000a00000 len 4194304 PASSED 00:05:52.798 free 0x200000500000 3145728 00:05:52.798 free 0x2000004fff40 64 00:05:52.798 unregister 0x200000400000 4194304 PASSED 00:05:52.798 free 0x200000a00000 4194304 00:05:52.798 unregister 0x200000800000 6291456 PASSED 00:05:52.798 malloc 8388608 00:05:52.798 register 0x200000400000 10485760 00:05:52.798 buf 0x200000600000 len 8388608 PASSED 00:05:52.798 free 0x200000600000 8388608 00:05:52.798 unregister 0x200000400000 10485760 PASSED 00:05:52.798 passed 00:05:52.798 00:05:52.798 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.798 suites 1 1 n/a 0 0 00:05:52.798 tests 1 1 1 0 0 00:05:52.798 asserts 15 15 15 0 n/a 00:05:52.798 00:05:52.798 Elapsed time = 0.005 seconds 00:05:52.798 00:05:52.798 real 0m0.049s 00:05:52.798 user 0m0.010s 00:05:52.798 sys 0m0.037s 00:05:52.798 13:16:44 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.798 13:16:44 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:52.798 ************************************ 00:05:52.798 END TEST env_mem_callbacks 00:05:52.798 ************************************ 00:05:52.798 00:05:52.799 real 0m6.497s 00:05:52.799 user 0m4.477s 00:05:52.799 sys 0m1.061s 00:05:52.799 13:16:44 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.799 13:16:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.799 ************************************ 00:05:52.799 END TEST env 00:05:52.799 ************************************ 00:05:52.799 13:16:44 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:52.799 13:16:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.799 13:16:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.799 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:05:52.799 ************************************ 00:05:52.799 START TEST rpc 00:05:52.799 ************************************ 00:05:52.799 13:16:44 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:53.058 * Looking for test storage... 
00:05:53.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.058 13:16:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.058 13:16:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.058 13:16:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.058 13:16:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.058 13:16:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.058 13:16:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.058 13:16:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.058 13:16:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.058 13:16:44 rpc -- scripts/common.sh@345 -- # : 1 00:05:53.058 13:16:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.058 13:16:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.058 13:16:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.058 13:16:44 rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.058 13:16:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.058 13:16:44 rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.058 13:16:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.058 13:16:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.058 13:16:44 rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.058 13:16:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.058 13:16:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.058 13:16:44 rpc -- scripts/common.sh@368 -- # return 0 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc geninfo_unexecuted_blocks=1 00:05:53.058 00:05:53.058 ' 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc geninfo_unexecuted_blocks=1 00:05:53.058 00:05:53.058 ' 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc geninfo_unexecuted_blocks=1 00:05:53.058 00:05:53.058 ' 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.058 --rc genhtml_branch_coverage=1 00:05:53.058 --rc genhtml_function_coverage=1 00:05:53.058 --rc genhtml_legend=1 00:05:53.058 --rc geninfo_all_blocks=1 00:05:53.058 --rc geninfo_unexecuted_blocks=1 00:05:53.058 00:05:53.058 ' 00:05:53.058 13:16:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99798 00:05:53.058 13:16:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:53.058 13:16:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.058 13:16:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99798 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@831 -- # '[' -z 99798 ']' 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.058 13:16:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.058 [2024-10-14 13:16:44.813637] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:05:53.058 [2024-10-14 13:16:44.813736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99798 ] 00:05:53.058 [2024-10-14 13:16:44.872590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.319 [2024-10-14 13:16:44.919532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.319 [2024-10-14 13:16:44.919582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99798' to capture a snapshot of events at runtime. 00:05:53.319 [2024-10-14 13:16:44.919614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.319 [2024-10-14 13:16:44.919626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.319 [2024-10-14 13:16:44.919635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99798 for offline analysis/debug. 
00:05:53.319 [2024-10-14 13:16:44.920178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.319 13:16:45 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.319 13:16:45 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.580 13:16:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.580 13:16:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.580 13:16:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:53.580 13:16:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:53.580 13:16:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.580 13:16:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.580 13:16:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.580 ************************************ 00:05:53.580 START TEST rpc_integrity 00:05:53.580 ************************************ 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:53.580 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.580 13:16:45 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:53.580 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:53.580 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:53.580 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.580 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:53.580 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.580 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.580 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:53.580 { 00:05:53.580 "name": "Malloc0", 00:05:53.580 "aliases": [ 00:05:53.580 "1242db96-0e72-4e34-b995-5597f020ca0a" 00:05:53.580 ], 00:05:53.580 "product_name": "Malloc disk", 00:05:53.580 "block_size": 512, 00:05:53.580 "num_blocks": 16384, 00:05:53.580 "uuid": "1242db96-0e72-4e34-b995-5597f020ca0a", 00:05:53.580 "assigned_rate_limits": { 00:05:53.580 "rw_ios_per_sec": 0, 00:05:53.580 "rw_mbytes_per_sec": 0, 00:05:53.580 "r_mbytes_per_sec": 0, 00:05:53.580 "w_mbytes_per_sec": 0 00:05:53.580 }, 00:05:53.580 "claimed": false, 00:05:53.580 "zoned": false, 00:05:53.580 "supported_io_types": { 00:05:53.580 "read": true, 00:05:53.581 "write": true, 00:05:53.581 "unmap": true, 00:05:53.581 "flush": true, 00:05:53.581 "reset": true, 00:05:53.581 "nvme_admin": false, 00:05:53.581 "nvme_io": false, 00:05:53.581 "nvme_io_md": false, 00:05:53.581 "write_zeroes": true, 00:05:53.581 "zcopy": true, 00:05:53.581 "get_zone_info": false, 00:05:53.581 
"zone_management": false, 00:05:53.581 "zone_append": false, 00:05:53.581 "compare": false, 00:05:53.581 "compare_and_write": false, 00:05:53.581 "abort": true, 00:05:53.581 "seek_hole": false, 00:05:53.581 "seek_data": false, 00:05:53.581 "copy": true, 00:05:53.581 "nvme_iov_md": false 00:05:53.581 }, 00:05:53.581 "memory_domains": [ 00:05:53.581 { 00:05:53.581 "dma_device_id": "system", 00:05:53.581 "dma_device_type": 1 00:05:53.581 }, 00:05:53.581 { 00:05:53.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.581 "dma_device_type": 2 00:05:53.581 } 00:05:53.581 ], 00:05:53.581 "driver_specific": {} 00:05:53.581 } 00:05:53.581 ]' 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.581 [2024-10-14 13:16:45.296940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:53.581 [2024-10-14 13:16:45.296990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:53.581 [2024-10-14 13:16:45.297010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15a0c40 00:05:53.581 [2024-10-14 13:16:45.297023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:53.581 [2024-10-14 13:16:45.298380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:53.581 [2024-10-14 13:16:45.298406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:53.581 Passthru0 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:53.581 { 00:05:53.581 "name": "Malloc0", 00:05:53.581 "aliases": [ 00:05:53.581 "1242db96-0e72-4e34-b995-5597f020ca0a" 00:05:53.581 ], 00:05:53.581 "product_name": "Malloc disk", 00:05:53.581 "block_size": 512, 00:05:53.581 "num_blocks": 16384, 00:05:53.581 "uuid": "1242db96-0e72-4e34-b995-5597f020ca0a", 00:05:53.581 "assigned_rate_limits": { 00:05:53.581 "rw_ios_per_sec": 0, 00:05:53.581 "rw_mbytes_per_sec": 0, 00:05:53.581 "r_mbytes_per_sec": 0, 00:05:53.581 "w_mbytes_per_sec": 0 00:05:53.581 }, 00:05:53.581 "claimed": true, 00:05:53.581 "claim_type": "exclusive_write", 00:05:53.581 "zoned": false, 00:05:53.581 "supported_io_types": { 00:05:53.581 "read": true, 00:05:53.581 "write": true, 00:05:53.581 "unmap": true, 00:05:53.581 "flush": true, 00:05:53.581 "reset": true, 00:05:53.581 "nvme_admin": false, 00:05:53.581 "nvme_io": false, 00:05:53.581 "nvme_io_md": false, 00:05:53.581 "write_zeroes": true, 00:05:53.581 "zcopy": true, 00:05:53.581 "get_zone_info": false, 00:05:53.581 "zone_management": false, 00:05:53.581 "zone_append": false, 00:05:53.581 "compare": false, 00:05:53.581 "compare_and_write": false, 00:05:53.581 "abort": true, 00:05:53.581 "seek_hole": false, 00:05:53.581 "seek_data": false, 00:05:53.581 "copy": true, 00:05:53.581 "nvme_iov_md": false 00:05:53.581 }, 00:05:53.581 "memory_domains": [ 00:05:53.581 { 00:05:53.581 "dma_device_id": "system", 00:05:53.581 "dma_device_type": 1 00:05:53.581 }, 00:05:53.581 { 00:05:53.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.581 "dma_device_type": 2 00:05:53.581 } 00:05:53.581 ], 00:05:53.581 "driver_specific": {} 00:05:53.581 }, 00:05:53.581 { 
00:05:53.581 "name": "Passthru0", 00:05:53.581 "aliases": [ 00:05:53.581 "2412ba1b-0fdb-5d1c-b808-e35e6680f65a" 00:05:53.581 ], 00:05:53.581 "product_name": "passthru", 00:05:53.581 "block_size": 512, 00:05:53.581 "num_blocks": 16384, 00:05:53.581 "uuid": "2412ba1b-0fdb-5d1c-b808-e35e6680f65a", 00:05:53.581 "assigned_rate_limits": { 00:05:53.581 "rw_ios_per_sec": 0, 00:05:53.581 "rw_mbytes_per_sec": 0, 00:05:53.581 "r_mbytes_per_sec": 0, 00:05:53.581 "w_mbytes_per_sec": 0 00:05:53.581 }, 00:05:53.581 "claimed": false, 00:05:53.581 "zoned": false, 00:05:53.581 "supported_io_types": { 00:05:53.581 "read": true, 00:05:53.581 "write": true, 00:05:53.581 "unmap": true, 00:05:53.581 "flush": true, 00:05:53.581 "reset": true, 00:05:53.581 "nvme_admin": false, 00:05:53.581 "nvme_io": false, 00:05:53.581 "nvme_io_md": false, 00:05:53.581 "write_zeroes": true, 00:05:53.581 "zcopy": true, 00:05:53.581 "get_zone_info": false, 00:05:53.581 "zone_management": false, 00:05:53.581 "zone_append": false, 00:05:53.581 "compare": false, 00:05:53.581 "compare_and_write": false, 00:05:53.581 "abort": true, 00:05:53.581 "seek_hole": false, 00:05:53.581 "seek_data": false, 00:05:53.581 "copy": true, 00:05:53.581 "nvme_iov_md": false 00:05:53.581 }, 00:05:53.581 "memory_domains": [ 00:05:53.581 { 00:05:53.581 "dma_device_id": "system", 00:05:53.581 "dma_device_type": 1 00:05:53.581 }, 00:05:53.581 { 00:05:53.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.581 "dma_device_type": 2 00:05:53.581 } 00:05:53.581 ], 00:05:53.581 "driver_specific": { 00:05:53.581 "passthru": { 00:05:53.581 "name": "Passthru0", 00:05:53.581 "base_bdev_name": "Malloc0" 00:05:53.581 } 00:05:53.581 } 00:05:53.581 } 00:05:53.581 ]' 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:53.581 13:16:45 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:53.581 13:16:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:53.581 00:05:53.581 real 0m0.209s 00:05:53.581 user 0m0.137s 00:05:53.581 sys 0m0.018s 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.581 13:16:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.581 ************************************ 00:05:53.581 END TEST rpc_integrity 00:05:53.581 ************************************ 00:05:53.581 13:16:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:53.581 13:16:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.581 13:16:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.581 13:16:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 ************************************ 00:05:53.842 START TEST rpc_plugins 
00:05:53.842 ************************************ 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:53.842 { 00:05:53.842 "name": "Malloc1", 00:05:53.842 "aliases": [ 00:05:53.842 "37f35a06-d8f8-42d7-84f5-95c1ee15df1d" 00:05:53.842 ], 00:05:53.842 "product_name": "Malloc disk", 00:05:53.842 "block_size": 4096, 00:05:53.842 "num_blocks": 256, 00:05:53.842 "uuid": "37f35a06-d8f8-42d7-84f5-95c1ee15df1d", 00:05:53.842 "assigned_rate_limits": { 00:05:53.842 "rw_ios_per_sec": 0, 00:05:53.842 "rw_mbytes_per_sec": 0, 00:05:53.842 "r_mbytes_per_sec": 0, 00:05:53.842 "w_mbytes_per_sec": 0 00:05:53.842 }, 00:05:53.842 "claimed": false, 00:05:53.842 "zoned": false, 00:05:53.842 "supported_io_types": { 00:05:53.842 "read": true, 00:05:53.842 "write": true, 00:05:53.842 "unmap": true, 00:05:53.842 "flush": true, 00:05:53.842 "reset": true, 00:05:53.842 "nvme_admin": false, 00:05:53.842 "nvme_io": false, 00:05:53.842 "nvme_io_md": false, 00:05:53.842 "write_zeroes": true, 00:05:53.842 "zcopy": true, 00:05:53.842 "get_zone_info": false, 00:05:53.842 "zone_management": false, 00:05:53.842 
"zone_append": false, 00:05:53.842 "compare": false, 00:05:53.842 "compare_and_write": false, 00:05:53.842 "abort": true, 00:05:53.842 "seek_hole": false, 00:05:53.842 "seek_data": false, 00:05:53.842 "copy": true, 00:05:53.842 "nvme_iov_md": false 00:05:53.842 }, 00:05:53.842 "memory_domains": [ 00:05:53.842 { 00:05:53.842 "dma_device_id": "system", 00:05:53.842 "dma_device_type": 1 00:05:53.842 }, 00:05:53.842 { 00:05:53.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.842 "dma_device_type": 2 00:05:53.842 } 00:05:53.842 ], 00:05:53.842 "driver_specific": {} 00:05:53.842 } 00:05:53.842 ]' 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:53.842 13:16:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:53.842 00:05:53.842 real 0m0.108s 00:05:53.842 user 0m0.070s 00:05:53.842 sys 0m0.008s 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.842 13:16:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 ************************************ 
00:05:53.842 END TEST rpc_plugins 00:05:53.842 ************************************ 00:05:53.842 13:16:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:53.842 13:16:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.842 13:16:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.842 13:16:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 ************************************ 00:05:53.842 START TEST rpc_trace_cmd_test 00:05:53.842 ************************************ 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:53.842 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99798", 00:05:53.842 "tpoint_group_mask": "0x8", 00:05:53.842 "iscsi_conn": { 00:05:53.842 "mask": "0x2", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "scsi": { 00:05:53.842 "mask": "0x4", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "bdev": { 00:05:53.842 "mask": "0x8", 00:05:53.842 "tpoint_mask": "0xffffffffffffffff" 00:05:53.842 }, 00:05:53.842 "nvmf_rdma": { 00:05:53.842 "mask": "0x10", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "nvmf_tcp": { 00:05:53.842 "mask": "0x20", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "ftl": { 00:05:53.842 "mask": "0x40", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "blobfs": { 00:05:53.842 "mask": "0x80", 00:05:53.842 
"tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "dsa": { 00:05:53.842 "mask": "0x200", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "thread": { 00:05:53.842 "mask": "0x400", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "nvme_pcie": { 00:05:53.842 "mask": "0x800", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "iaa": { 00:05:53.842 "mask": "0x1000", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "nvme_tcp": { 00:05:53.842 "mask": "0x2000", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "bdev_nvme": { 00:05:53.842 "mask": "0x4000", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "sock": { 00:05:53.842 "mask": "0x8000", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "blob": { 00:05:53.842 "mask": "0x10000", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "bdev_raid": { 00:05:53.842 "mask": "0x20000", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 }, 00:05:53.842 "scheduler": { 00:05:53.842 "mask": "0x40000", 00:05:53.842 "tpoint_mask": "0x0" 00:05:53.842 } 00:05:53.842 }' 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:53.842 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:54.104 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:54.104 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:54.104 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:54.104 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:54.104 13:16:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:54.104 00:05:54.104 real 0m0.183s 00:05:54.104 user 0m0.156s 00:05:54.104 sys 0m0.017s 00:05:54.104 13:16:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.104 13:16:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.104 ************************************ 00:05:54.104 END TEST rpc_trace_cmd_test 00:05:54.104 ************************************ 00:05:54.104 13:16:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:54.104 13:16:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:54.104 13:16:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:54.104 13:16:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.104 13:16:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.104 13:16:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.104 ************************************ 00:05:54.104 START TEST rpc_daemon_integrity 00:05:54.104 ************************************ 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.104 { 00:05:54.104 "name": "Malloc2", 00:05:54.104 "aliases": [ 00:05:54.104 "8045637e-2ad7-476a-ba97-58d19963d3d3" 00:05:54.104 ], 00:05:54.104 "product_name": "Malloc disk", 00:05:54.104 "block_size": 512, 00:05:54.104 "num_blocks": 16384, 00:05:54.104 "uuid": "8045637e-2ad7-476a-ba97-58d19963d3d3", 00:05:54.104 "assigned_rate_limits": { 00:05:54.104 "rw_ios_per_sec": 0, 00:05:54.104 "rw_mbytes_per_sec": 0, 00:05:54.104 "r_mbytes_per_sec": 0, 00:05:54.104 "w_mbytes_per_sec": 0 00:05:54.104 }, 00:05:54.104 "claimed": false, 00:05:54.104 "zoned": false, 00:05:54.104 "supported_io_types": { 00:05:54.104 "read": true, 00:05:54.104 "write": true, 00:05:54.104 "unmap": true, 00:05:54.104 "flush": true, 00:05:54.104 "reset": true, 00:05:54.104 "nvme_admin": false, 00:05:54.104 "nvme_io": false, 00:05:54.104 "nvme_io_md": false, 00:05:54.104 "write_zeroes": true, 00:05:54.104 "zcopy": true, 00:05:54.104 "get_zone_info": false, 00:05:54.104 "zone_management": false, 00:05:54.104 "zone_append": false, 00:05:54.104 "compare": false, 00:05:54.104 "compare_and_write": false, 00:05:54.104 "abort": true, 00:05:54.104 "seek_hole": false, 00:05:54.104 "seek_data": false, 00:05:54.104 "copy": true, 00:05:54.104 "nvme_iov_md": false 00:05:54.104 }, 00:05:54.104 "memory_domains": [ 00:05:54.104 { 
00:05:54.104 "dma_device_id": "system", 00:05:54.104 "dma_device_type": 1 00:05:54.104 }, 00:05:54.104 { 00:05:54.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.104 "dma_device_type": 2 00:05:54.104 } 00:05:54.104 ], 00:05:54.104 "driver_specific": {} 00:05:54.104 } 00:05:54.104 ]' 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.104 [2024-10-14 13:16:45.943076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:54.104 [2024-10-14 13:16:45.943125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.104 [2024-10-14 13:16:45.943162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15a4950 00:05:54.104 [2024-10-14 13:16:45.943192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.104 [2024-10-14 13:16:45.944367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.104 [2024-10-14 13:16:45.944391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.104 Passthru0 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.104 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.366 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:54.366 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.366 { 00:05:54.366 "name": "Malloc2", 00:05:54.366 "aliases": [ 00:05:54.366 "8045637e-2ad7-476a-ba97-58d19963d3d3" 00:05:54.366 ], 00:05:54.366 "product_name": "Malloc disk", 00:05:54.366 "block_size": 512, 00:05:54.366 "num_blocks": 16384, 00:05:54.366 "uuid": "8045637e-2ad7-476a-ba97-58d19963d3d3", 00:05:54.366 "assigned_rate_limits": { 00:05:54.366 "rw_ios_per_sec": 0, 00:05:54.366 "rw_mbytes_per_sec": 0, 00:05:54.366 "r_mbytes_per_sec": 0, 00:05:54.366 "w_mbytes_per_sec": 0 00:05:54.366 }, 00:05:54.366 "claimed": true, 00:05:54.366 "claim_type": "exclusive_write", 00:05:54.366 "zoned": false, 00:05:54.366 "supported_io_types": { 00:05:54.366 "read": true, 00:05:54.366 "write": true, 00:05:54.366 "unmap": true, 00:05:54.366 "flush": true, 00:05:54.366 "reset": true, 00:05:54.366 "nvme_admin": false, 00:05:54.366 "nvme_io": false, 00:05:54.366 "nvme_io_md": false, 00:05:54.366 "write_zeroes": true, 00:05:54.366 "zcopy": true, 00:05:54.366 "get_zone_info": false, 00:05:54.366 "zone_management": false, 00:05:54.366 "zone_append": false, 00:05:54.366 "compare": false, 00:05:54.366 "compare_and_write": false, 00:05:54.366 "abort": true, 00:05:54.366 "seek_hole": false, 00:05:54.366 "seek_data": false, 00:05:54.366 "copy": true, 00:05:54.366 "nvme_iov_md": false 00:05:54.366 }, 00:05:54.366 "memory_domains": [ 00:05:54.366 { 00:05:54.366 "dma_device_id": "system", 00:05:54.366 "dma_device_type": 1 00:05:54.366 }, 00:05:54.366 { 00:05:54.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.366 "dma_device_type": 2 00:05:54.366 } 00:05:54.366 ], 00:05:54.366 "driver_specific": {} 00:05:54.366 }, 00:05:54.366 { 00:05:54.366 "name": "Passthru0", 00:05:54.366 "aliases": [ 00:05:54.366 "c124804b-6de0-52ea-96df-275d8a122c09" 00:05:54.366 ], 00:05:54.366 "product_name": "passthru", 00:05:54.366 "block_size": 512, 00:05:54.366 "num_blocks": 16384, 00:05:54.366 "uuid": 
"c124804b-6de0-52ea-96df-275d8a122c09", 00:05:54.366 "assigned_rate_limits": { 00:05:54.366 "rw_ios_per_sec": 0, 00:05:54.366 "rw_mbytes_per_sec": 0, 00:05:54.366 "r_mbytes_per_sec": 0, 00:05:54.366 "w_mbytes_per_sec": 0 00:05:54.366 }, 00:05:54.366 "claimed": false, 00:05:54.366 "zoned": false, 00:05:54.366 "supported_io_types": { 00:05:54.366 "read": true, 00:05:54.366 "write": true, 00:05:54.366 "unmap": true, 00:05:54.366 "flush": true, 00:05:54.366 "reset": true, 00:05:54.366 "nvme_admin": false, 00:05:54.366 "nvme_io": false, 00:05:54.366 "nvme_io_md": false, 00:05:54.366 "write_zeroes": true, 00:05:54.366 "zcopy": true, 00:05:54.366 "get_zone_info": false, 00:05:54.367 "zone_management": false, 00:05:54.367 "zone_append": false, 00:05:54.367 "compare": false, 00:05:54.367 "compare_and_write": false, 00:05:54.367 "abort": true, 00:05:54.367 "seek_hole": false, 00:05:54.367 "seek_data": false, 00:05:54.367 "copy": true, 00:05:54.367 "nvme_iov_md": false 00:05:54.367 }, 00:05:54.367 "memory_domains": [ 00:05:54.367 { 00:05:54.367 "dma_device_id": "system", 00:05:54.367 "dma_device_type": 1 00:05:54.367 }, 00:05:54.367 { 00:05:54.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.367 "dma_device_type": 2 00:05:54.367 } 00:05:54.367 ], 00:05:54.367 "driver_specific": { 00:05:54.367 "passthru": { 00:05:54.367 "name": "Passthru0", 00:05:54.367 "base_bdev_name": "Malloc2" 00:05:54.367 } 00:05:54.367 } 00:05:54.367 } 00:05:54.367 ]' 00:05:54.367 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.367 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.367 13:16:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.367 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.367 13:16:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.367 00:05:54.367 real 0m0.213s 00:05:54.367 user 0m0.139s 00:05:54.367 sys 0m0.019s 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.367 13:16:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.367 ************************************ 00:05:54.367 END TEST rpc_daemon_integrity 00:05:54.367 ************************************ 00:05:54.367 13:16:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:54.367 13:16:46 rpc -- rpc/rpc.sh@84 -- # killprocess 99798 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@950 -- # '[' -z 99798 ']' 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@954 -- # kill -0 99798 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@955 -- # uname 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.367 13:16:46 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99798 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99798' 00:05:54.367 killing process with pid 99798 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@969 -- # kill 99798 00:05:54.367 13:16:46 rpc -- common/autotest_common.sh@974 -- # wait 99798 00:05:54.938 00:05:54.938 real 0m1.878s 00:05:54.938 user 0m2.339s 00:05:54.938 sys 0m0.592s 00:05:54.938 13:16:46 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.938 13:16:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.938 ************************************ 00:05:54.938 END TEST rpc 00:05:54.938 ************************************ 00:05:54.938 13:16:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:54.938 13:16:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.938 13:16:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.938 13:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:54.938 ************************************ 00:05:54.938 START TEST skip_rpc 00:05:54.938 ************************************ 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:54.938 * Looking for test storage... 
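The `END TEST rpc` marker above closes the JSON-RPC suite; its `rpc_trace_cmd_test` stage validated `trace_get_info` output with jq (enough groups, `tpoint_group_mask` and `tpoint_shm_path` present, the enabled `bdev` group's `tpoint_mask` nonzero). Those checks can be sketched in Python against the JSON shape the log prints; the payload below is a hypothetical trimmed sample, not the full output:

```python
import json

# Hypothetical trimmed trace_get_info payload, shaped like the JSON the
# rpc_trace_cmd_test log prints above (only two tpoint groups kept).
trace_info = json.loads("""
{
  "tpoint_group_mask": "0x8",
  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99798",
  "bdev": {"mask": "0x8", "tpoint_mask": "0xffffffffffffffff"},
  "thread": {"mask": "0x400", "tpoint_mask": "0x0"}
}
""")

# Mirror the jq assertions from rpc.sh: enough top-level keys, the
# required fields present, and the enabled group's tpoint_mask set.
assert len(trace_info) > 2
assert "tpoint_group_mask" in trace_info
assert "tpoint_shm_path" in trace_info
assert "bdev" in trace_info
assert int(trace_info["bdev"]["tpoint_mask"], 16) != 0x0
print("trace assertions passed")
```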
00:05:54.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.938 13:16:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.938 --rc genhtml_branch_coverage=1 00:05:54.938 --rc genhtml_function_coverage=1 00:05:54.938 --rc genhtml_legend=1 00:05:54.938 --rc geninfo_all_blocks=1 00:05:54.938 --rc geninfo_unexecuted_blocks=1 00:05:54.938 00:05:54.938 ' 00:05:54.938 13:16:46 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.938 --rc genhtml_branch_coverage=1 00:05:54.938 --rc genhtml_function_coverage=1 00:05:54.938 --rc genhtml_legend=1 00:05:54.938 --rc geninfo_all_blocks=1 00:05:54.938 --rc geninfo_unexecuted_blocks=1 00:05:54.938 00:05:54.938 ' 00:05:54.939 13:16:46 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:54.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.939 --rc genhtml_branch_coverage=1 00:05:54.939 --rc genhtml_function_coverage=1 00:05:54.939 --rc genhtml_legend=1 00:05:54.939 --rc geninfo_all_blocks=1 00:05:54.939 --rc geninfo_unexecuted_blocks=1 00:05:54.939 00:05:54.939 ' 00:05:54.939 13:16:46 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.939 --rc genhtml_branch_coverage=1 00:05:54.939 --rc genhtml_function_coverage=1 00:05:54.939 --rc genhtml_legend=1 00:05:54.939 --rc geninfo_all_blocks=1 00:05:54.939 --rc geninfo_unexecuted_blocks=1 00:05:54.939 00:05:54.939 ' 00:05:54.939 13:16:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.939 13:16:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:54.939 13:16:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:54.939 13:16:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.939 13:16:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.939 13:16:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.939 ************************************ 00:05:54.939 START TEST skip_rpc 00:05:54.939 ************************************ 00:05:54.939 13:16:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:54.939 13:16:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=100130 00:05:54.939 13:16:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:54.939 13:16:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.939 13:16:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
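The `skip_rpc` test above has just launched `spdk_tgt` with `--no-rpc-server`, and the lines that follow expect `rpc_cmd spdk_get_version` to fail (the `NOT` wrapper turning that failure into a pass, with `es=1`). The inversion pattern can be sketched as below; this `NOT` is a simplified stand-in for the helper in `common/autotest_common.sh`, and `false`/`true` stand in for the failing and succeeding RPC calls:

```shell
# Simplified sketch of the NOT helper pattern: succeed only when the
# wrapped command fails, as rpc_cmd must when the target was started
# with --no-rpc-server.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "inverted failure -> success"
NOT true || echo "inverted success -> failure"
```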
00:05:54.939 [2024-10-14 13:16:46.758571] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:05:54.939 [2024-10-14 13:16:46.758651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100130 ] 00:05:55.199 [2024-10-14 13:16:46.816208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.199 [2024-10-14 13:16:46.865911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.489 13:16:51 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 100130 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 100130 ']' 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 100130 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100130 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100130' 00:06:00.489 killing process with pid 100130 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 100130 00:06:00.489 13:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 100130 00:06:00.489 00:06:00.489 real 0m5.409s 00:06:00.489 user 0m5.103s 00:06:00.489 sys 0m0.316s 00:06:00.489 13:16:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.489 13:16:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.489 ************************************ 00:06:00.489 END TEST skip_rpc 00:06:00.489 ************************************ 00:06:00.489 13:16:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:00.489 13:16:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.489 13:16:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.489 13:16:52 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.489 ************************************ 00:06:00.489 START TEST skip_rpc_with_json 00:06:00.489 ************************************ 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100817 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100817 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 100817 ']' 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.489 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.489 [2024-10-14 13:16:52.220856] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:06:00.489 [2024-10-14 13:16:52.220958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100817 ] 00:06:00.489 [2024-10-14 13:16:52.283385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.489 [2024-10-14 13:16:52.332634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.749 [2024-10-14 13:16:52.590703] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:00.749 request: 00:06:00.749 { 00:06:00.749 "trtype": "tcp", 00:06:00.749 "method": "nvmf_get_transports", 00:06:00.749 "req_id": 1 00:06:00.749 } 00:06:00.749 Got JSON-RPC error response 00:06:00.749 response: 00:06:00.749 { 00:06:00.749 "code": -19, 00:06:00.749 "message": "No such device" 00:06:00.749 } 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.749 [2024-10-14 13:16:52.598826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.749 13:16:52 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.749 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.008 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.008 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.008 { 00:06:01.008 "subsystems": [ 00:06:01.008 { 00:06:01.008 "subsystem": "fsdev", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "fsdev_set_opts", 00:06:01.008 "params": { 00:06:01.008 "fsdev_io_pool_size": 65535, 00:06:01.008 "fsdev_io_cache_size": 256 00:06:01.008 } 00:06:01.008 } 00:06:01.008 ] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "vfio_user_target", 00:06:01.008 "config": null 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "keyring", 00:06:01.008 "config": [] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "iobuf", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "iobuf_set_options", 00:06:01.008 "params": { 00:06:01.008 "small_pool_count": 8192, 00:06:01.008 "large_pool_count": 1024, 00:06:01.008 "small_bufsize": 8192, 00:06:01.008 "large_bufsize": 135168 00:06:01.008 } 00:06:01.008 } 00:06:01.008 ] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "sock", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "sock_set_default_impl", 00:06:01.008 "params": { 00:06:01.008 "impl_name": "posix" 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "sock_impl_set_options", 00:06:01.008 "params": { 00:06:01.008 "impl_name": "ssl", 00:06:01.008 "recv_buf_size": 4096, 00:06:01.008 "send_buf_size": 4096, 00:06:01.008 "enable_recv_pipe": true, 
00:06:01.008 "enable_quickack": false, 00:06:01.008 "enable_placement_id": 0, 00:06:01.008 "enable_zerocopy_send_server": true, 00:06:01.008 "enable_zerocopy_send_client": false, 00:06:01.008 "zerocopy_threshold": 0, 00:06:01.008 "tls_version": 0, 00:06:01.008 "enable_ktls": false 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "sock_impl_set_options", 00:06:01.008 "params": { 00:06:01.008 "impl_name": "posix", 00:06:01.008 "recv_buf_size": 2097152, 00:06:01.008 "send_buf_size": 2097152, 00:06:01.008 "enable_recv_pipe": true, 00:06:01.008 "enable_quickack": false, 00:06:01.008 "enable_placement_id": 0, 00:06:01.008 "enable_zerocopy_send_server": true, 00:06:01.008 "enable_zerocopy_send_client": false, 00:06:01.008 "zerocopy_threshold": 0, 00:06:01.008 "tls_version": 0, 00:06:01.008 "enable_ktls": false 00:06:01.008 } 00:06:01.008 } 00:06:01.008 ] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "vmd", 00:06:01.008 "config": [] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "accel", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "accel_set_options", 00:06:01.008 "params": { 00:06:01.008 "small_cache_size": 128, 00:06:01.008 "large_cache_size": 16, 00:06:01.008 "task_count": 2048, 00:06:01.008 "sequence_count": 2048, 00:06:01.008 "buf_count": 2048 00:06:01.008 } 00:06:01.008 } 00:06:01.008 ] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "bdev", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "bdev_set_options", 00:06:01.008 "params": { 00:06:01.008 "bdev_io_pool_size": 65535, 00:06:01.008 "bdev_io_cache_size": 256, 00:06:01.008 "bdev_auto_examine": true, 00:06:01.008 "iobuf_small_cache_size": 128, 00:06:01.008 "iobuf_large_cache_size": 16 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "bdev_raid_set_options", 00:06:01.008 "params": { 00:06:01.008 "process_window_size_kb": 1024, 00:06:01.008 "process_max_bandwidth_mb_sec": 0 00:06:01.008 } 00:06:01.008 }, 
00:06:01.008 { 00:06:01.008 "method": "bdev_iscsi_set_options", 00:06:01.008 "params": { 00:06:01.008 "timeout_sec": 30 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "bdev_nvme_set_options", 00:06:01.008 "params": { 00:06:01.008 "action_on_timeout": "none", 00:06:01.008 "timeout_us": 0, 00:06:01.008 "timeout_admin_us": 0, 00:06:01.008 "keep_alive_timeout_ms": 10000, 00:06:01.008 "arbitration_burst": 0, 00:06:01.008 "low_priority_weight": 0, 00:06:01.008 "medium_priority_weight": 0, 00:06:01.008 "high_priority_weight": 0, 00:06:01.008 "nvme_adminq_poll_period_us": 10000, 00:06:01.008 "nvme_ioq_poll_period_us": 0, 00:06:01.008 "io_queue_requests": 0, 00:06:01.008 "delay_cmd_submit": true, 00:06:01.008 "transport_retry_count": 4, 00:06:01.008 "bdev_retry_count": 3, 00:06:01.008 "transport_ack_timeout": 0, 00:06:01.008 "ctrlr_loss_timeout_sec": 0, 00:06:01.008 "reconnect_delay_sec": 0, 00:06:01.008 "fast_io_fail_timeout_sec": 0, 00:06:01.008 "disable_auto_failback": false, 00:06:01.008 "generate_uuids": false, 00:06:01.008 "transport_tos": 0, 00:06:01.008 "nvme_error_stat": false, 00:06:01.008 "rdma_srq_size": 0, 00:06:01.008 "io_path_stat": false, 00:06:01.008 "allow_accel_sequence": false, 00:06:01.008 "rdma_max_cq_size": 0, 00:06:01.008 "rdma_cm_event_timeout_ms": 0, 00:06:01.008 "dhchap_digests": [ 00:06:01.008 "sha256", 00:06:01.008 "sha384", 00:06:01.008 "sha512" 00:06:01.008 ], 00:06:01.008 "dhchap_dhgroups": [ 00:06:01.008 "null", 00:06:01.008 "ffdhe2048", 00:06:01.008 "ffdhe3072", 00:06:01.008 "ffdhe4096", 00:06:01.008 "ffdhe6144", 00:06:01.008 "ffdhe8192" 00:06:01.008 ] 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "bdev_nvme_set_hotplug", 00:06:01.008 "params": { 00:06:01.008 "period_us": 100000, 00:06:01.008 "enable": false 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "bdev_wait_for_examine" 00:06:01.008 } 00:06:01.008 ] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "scsi", 
00:06:01.008 "config": null 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "scheduler", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "framework_set_scheduler", 00:06:01.008 "params": { 00:06:01.008 "name": "static" 00:06:01.008 } 00:06:01.008 } 00:06:01.008 ] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "vhost_scsi", 00:06:01.008 "config": [] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "vhost_blk", 00:06:01.008 "config": [] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "ublk", 00:06:01.008 "config": [] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "nbd", 00:06:01.008 "config": [] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "nvmf", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "nvmf_set_config", 00:06:01.008 "params": { 00:06:01.008 "discovery_filter": "match_any", 00:06:01.008 "admin_cmd_passthru": { 00:06:01.008 "identify_ctrlr": false 00:06:01.008 }, 00:06:01.008 "dhchap_digests": [ 00:06:01.008 "sha256", 00:06:01.008 "sha384", 00:06:01.008 "sha512" 00:06:01.008 ], 00:06:01.008 "dhchap_dhgroups": [ 00:06:01.008 "null", 00:06:01.008 "ffdhe2048", 00:06:01.008 "ffdhe3072", 00:06:01.008 "ffdhe4096", 00:06:01.008 "ffdhe6144", 00:06:01.008 "ffdhe8192" 00:06:01.008 ] 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "nvmf_set_max_subsystems", 00:06:01.008 "params": { 00:06:01.008 "max_subsystems": 1024 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "nvmf_set_crdt", 00:06:01.008 "params": { 00:06:01.008 "crdt1": 0, 00:06:01.008 "crdt2": 0, 00:06:01.008 "crdt3": 0 00:06:01.008 } 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "method": "nvmf_create_transport", 00:06:01.008 "params": { 00:06:01.008 "trtype": "TCP", 00:06:01.008 "max_queue_depth": 128, 00:06:01.008 "max_io_qpairs_per_ctrlr": 127, 00:06:01.008 "in_capsule_data_size": 4096, 00:06:01.008 "max_io_size": 131072, 00:06:01.008 "io_unit_size": 131072, 00:06:01.008 
"max_aq_depth": 128, 00:06:01.008 "num_shared_buffers": 511, 00:06:01.008 "buf_cache_size": 4294967295, 00:06:01.008 "dif_insert_or_strip": false, 00:06:01.008 "zcopy": false, 00:06:01.008 "c2h_success": true, 00:06:01.008 "sock_priority": 0, 00:06:01.008 "abort_timeout_sec": 1, 00:06:01.008 "ack_timeout": 0, 00:06:01.008 "data_wr_pool_size": 0 00:06:01.008 } 00:06:01.008 } 00:06:01.008 ] 00:06:01.008 }, 00:06:01.008 { 00:06:01.008 "subsystem": "iscsi", 00:06:01.008 "config": [ 00:06:01.008 { 00:06:01.008 "method": "iscsi_set_options", 00:06:01.008 "params": { 00:06:01.008 "node_base": "iqn.2016-06.io.spdk", 00:06:01.008 "max_sessions": 128, 00:06:01.008 "max_connections_per_session": 2, 00:06:01.009 "max_queue_depth": 64, 00:06:01.009 "default_time2wait": 2, 00:06:01.009 "default_time2retain": 20, 00:06:01.009 "first_burst_length": 8192, 00:06:01.009 "immediate_data": true, 00:06:01.009 "allow_duplicated_isid": false, 00:06:01.009 "error_recovery_level": 0, 00:06:01.009 "nop_timeout": 60, 00:06:01.009 "nop_in_interval": 30, 00:06:01.009 "disable_chap": false, 00:06:01.009 "require_chap": false, 00:06:01.009 "mutual_chap": false, 00:06:01.009 "chap_group": 0, 00:06:01.009 "max_large_datain_per_connection": 64, 00:06:01.009 "max_r2t_per_connection": 4, 00:06:01.009 "pdu_pool_size": 36864, 00:06:01.009 "immediate_data_pool_size": 16384, 00:06:01.009 "data_out_pool_size": 2048 00:06:01.009 } 00:06:01.009 } 00:06:01.009 ] 00:06:01.009 } 00:06:01.009 ] 00:06:01.009 } 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100817 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 100817 ']' 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 100817 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100817 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100817' 00:06:01.009 killing process with pid 100817 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 100817 00:06:01.009 13:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 100817 00:06:01.576 13:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=100957 00:06:01.576 13:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.576 13:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 100957 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 100957 ']' 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 100957 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100957 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100957' 00:06:06.861 killing process with pid 100957 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 100957 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 100957 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.861 00:06:06.861 real 0m6.420s 00:06:06.861 user 0m6.063s 00:06:06.861 sys 0m0.691s 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.861 ************************************ 00:06:06.861 END TEST skip_rpc_with_json 00:06:06.861 ************************************ 00:06:06.861 13:16:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:06.861 13:16:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.861 13:16:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.861 13:16:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.861 ************************************ 00:06:06.861 START TEST skip_rpc_with_delay 00:06:06.861 ************************************ 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.861 [2024-10-14 13:16:58.694945] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.861 00:06:06.861 real 0m0.075s 00:06:06.861 user 0m0.045s 00:06:06.861 sys 0m0.029s 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.861 13:16:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:06.861 ************************************ 00:06:06.861 END TEST skip_rpc_with_delay 00:06:06.861 ************************************ 00:06:07.123 13:16:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:07.123 13:16:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:07.123 13:16:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:07.123 13:16:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.123 13:16:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.123 13:16:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.123 ************************************ 00:06:07.123 START TEST exit_on_failed_rpc_init 00:06:07.123 ************************************ 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101675 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101675 
00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 101675 ']' 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.123 13:16:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:07.123 [2024-10-14 13:16:58.820869] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:07.123 [2024-10-14 13:16:58.820972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101675 ] 00:06:07.123 [2024-10-14 13:16:58.879932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.123 [2024-10-14 13:16:58.930085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.383 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.384 
13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:07.384 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.643 [2024-10-14 13:16:59.242372] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:06:07.643 [2024-10-14 13:16:59.242492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101681 ] 00:06:07.643 [2024-10-14 13:16:59.302208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.643 [2024-10-14 13:16:59.348621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.643 [2024-10-14 13:16:59.348744] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:07.644 [2024-10-14 13:16:59.348762] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:07.644 [2024-10-14 13:16:59.348780] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101675 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 101675 ']' 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 101675 00:06:07.644 13:16:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101675 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101675' 00:06:07.644 killing process with pid 101675 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 101675 00:06:07.644 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 101675 00:06:08.212 00:06:08.212 real 0m1.038s 00:06:08.212 user 0m1.120s 00:06:08.212 sys 0m0.425s 00:06:08.212 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.212 13:16:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.212 ************************************ 00:06:08.212 END TEST exit_on_failed_rpc_init 00:06:08.212 ************************************ 00:06:08.212 13:16:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:08.212 00:06:08.212 real 0m13.288s 00:06:08.213 user 0m12.509s 00:06:08.213 sys 0m1.649s 00:06:08.213 13:16:59 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.213 13:16:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.213 ************************************ 00:06:08.213 END TEST skip_rpc 00:06:08.213 ************************************ 00:06:08.213 13:16:59 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:08.213 13:16:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.213 13:16:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.213 13:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:08.213 ************************************ 00:06:08.213 START TEST rpc_client 00:06:08.213 ************************************ 00:06:08.213 13:16:59 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:08.213 * Looking for test storage... 00:06:08.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:08.213 13:16:59 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.213 13:16:59 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.213 13:16:59 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.213 13:17:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.213 --rc genhtml_branch_coverage=1 00:06:08.213 --rc genhtml_function_coverage=1 00:06:08.213 --rc genhtml_legend=1 00:06:08.213 --rc geninfo_all_blocks=1 00:06:08.213 --rc geninfo_unexecuted_blocks=1 00:06:08.213 00:06:08.213 ' 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.213 --rc genhtml_branch_coverage=1 
00:06:08.213 --rc genhtml_function_coverage=1 00:06:08.213 --rc genhtml_legend=1 00:06:08.213 --rc geninfo_all_blocks=1 00:06:08.213 --rc geninfo_unexecuted_blocks=1 00:06:08.213 00:06:08.213 ' 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.213 --rc genhtml_branch_coverage=1 00:06:08.213 --rc genhtml_function_coverage=1 00:06:08.213 --rc genhtml_legend=1 00:06:08.213 --rc geninfo_all_blocks=1 00:06:08.213 --rc geninfo_unexecuted_blocks=1 00:06:08.213 00:06:08.213 ' 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.213 --rc genhtml_branch_coverage=1 00:06:08.213 --rc genhtml_function_coverage=1 00:06:08.213 --rc genhtml_legend=1 00:06:08.213 --rc geninfo_all_blocks=1 00:06:08.213 --rc geninfo_unexecuted_blocks=1 00:06:08.213 00:06:08.213 ' 00:06:08.213 13:17:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:08.213 OK 00:06:08.213 13:17:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.213 00:06:08.213 real 0m0.165s 00:06:08.213 user 0m0.103s 00:06:08.213 sys 0m0.071s 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.213 13:17:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:08.213 ************************************ 00:06:08.213 END TEST rpc_client 00:06:08.213 ************************************ 00:06:08.213 13:17:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.213 13:17:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.213 13:17:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.213 13:17:00 -- common/autotest_common.sh@10 
-- # set +x 00:06:08.473 ************************************ 00:06:08.473 START TEST json_config 00:06:08.473 ************************************ 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.473 13:17:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.473 13:17:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.473 13:17:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.473 13:17:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.473 13:17:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.473 13:17:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.473 13:17:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.473 13:17:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:08.473 13:17:00 json_config -- scripts/common.sh@345 -- # : 1 00:06:08.473 13:17:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.473 13:17:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.473 13:17:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:08.473 13:17:00 json_config -- scripts/common.sh@353 -- # local d=1 00:06:08.473 13:17:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.473 13:17:00 json_config -- scripts/common.sh@355 -- # echo 1 00:06:08.473 13:17:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.473 13:17:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@353 -- # local d=2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.473 13:17:00 json_config -- scripts/common.sh@355 -- # echo 2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.473 13:17:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.473 13:17:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.473 13:17:00 json_config -- scripts/common.sh@368 -- # return 0 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.473 --rc genhtml_branch_coverage=1 00:06:08.473 --rc genhtml_function_coverage=1 00:06:08.473 --rc genhtml_legend=1 00:06:08.473 --rc geninfo_all_blocks=1 00:06:08.473 --rc geninfo_unexecuted_blocks=1 00:06:08.473 00:06:08.473 ' 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.473 --rc genhtml_branch_coverage=1 00:06:08.473 --rc genhtml_function_coverage=1 00:06:08.473 --rc genhtml_legend=1 00:06:08.473 --rc geninfo_all_blocks=1 00:06:08.473 --rc geninfo_unexecuted_blocks=1 00:06:08.473 00:06:08.473 ' 00:06:08.473 13:17:00 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.473 --rc genhtml_branch_coverage=1 00:06:08.473 --rc genhtml_function_coverage=1 00:06:08.473 --rc genhtml_legend=1 00:06:08.473 --rc geninfo_all_blocks=1 00:06:08.473 --rc geninfo_unexecuted_blocks=1 00:06:08.473 00:06:08.473 ' 00:06:08.473 13:17:00 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.473 --rc genhtml_branch_coverage=1 00:06:08.473 --rc genhtml_function_coverage=1 00:06:08.473 --rc genhtml_legend=1 00:06:08.473 --rc geninfo_all_blocks=1 00:06:08.473 --rc geninfo_unexecuted_blocks=1 00:06:08.473 00:06:08.473 ' 00:06:08.473 13:17:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.473 13:17:00 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.473 13:17:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.473 13:17:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.473 13:17:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.473 13:17:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.473 13:17:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.473 13:17:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.473 13:17:00 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.473 13:17:00 json_config -- paths/export.sh@5 -- # export PATH 00:06:08.474 13:17:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@51 -- # : 0 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.474 13:17:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:08.474 INFO: JSON configuration test init 00:06:08.474 13:17:00 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.474 13:17:00 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:08.474 13:17:00 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.474 13:17:00 json_config -- json_config/common.sh@10 -- # shift 00:06:08.474 13:17:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.474 13:17:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.474 13:17:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.474 13:17:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.474 13:17:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.474 13:17:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=101939 00:06:08.474 13:17:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.474 Waiting for target to run... 
00:06:08.474 13:17:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:08.474 13:17:00 json_config -- json_config/common.sh@25 -- # waitforlisten 101939 /var/tmp/spdk_tgt.sock 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@831 -- # '[' -z 101939 ']' 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.474 13:17:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.474 [2024-10-14 13:17:00.301174] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
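The `waitforlisten` step above polls until `spdk_tgt` is alive and listening on its UNIX-domain RPC socket before the test proceeds. A minimal sketch of that pattern (the function name, polling interval, and retry default here are hypothetical, not SPDK's actual implementation):

```shell
# Hypothetical sketch: poll until a pid is alive AND its UNIX-domain
# socket exists, mirroring waitforlisten's "Waiting for process to start
# up and listen on UNIX domain socket ..." phase.
wait_for_socket() {
    local pid=$1 sock=$2 max_retries=${3:-100} i=0
    while (( i < max_retries )); do
        # kill -0 checks only that the pid exists; -S tests for a socket file
        if kill -0 "$pid" 2>/dev/null && [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
        (( i++ ))
    done
    return 1    # timed out: process never came up or never opened the socket
}
```

The real harness also bounds the wait with `max_retries=100`, which is why a hung target fails the test quickly instead of blocking the pipeline.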
00:06:08.474 [2024-10-14 13:17:00.301275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101939 ] 00:06:09.043 [2024-10-14 13:17:00.821881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.043 [2024-10-14 13:17:00.865168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.611 13:17:01 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.611 13:17:01 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:09.611 13:17:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.611 00:06:09.611 13:17:01 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:09.611 13:17:01 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:09.611 13:17:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.611 13:17:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.611 13:17:01 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:09.611 13:17:01 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:09.611 13:17:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.611 13:17:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.611 13:17:01 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:09.611 13:17:01 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:09.611 13:17:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:12.911 13:17:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:12.911 13:17:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:12.911 13:17:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:12.911 13:17:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@54 -- # sort 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:13.171 13:17:04 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:13.171 13:17:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.171 13:17:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:13.171 13:17:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.171 13:17:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:13.171 13:17:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:13.171 13:17:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:13.430 MallocForNvmf0 00:06:13.430 13:17:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:13.430 13:17:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.688 MallocForNvmf1 00:06:13.688 13:17:05 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.688 13:17:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.947 [2024-10-14 13:17:05.587306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.947 13:17:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.947 13:17:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:14.205 13:17:05 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:14.205 13:17:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:14.464 13:17:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.464 13:17:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.723 13:17:06 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.723 13:17:06 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.982 [2024-10-14 13:17:06.654663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.982 13:17:06 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:14.982 13:17:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.982 13:17:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.982 13:17:06 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:14.982 13:17:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.982 13:17:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.982 13:17:06 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:14.982 13:17:06 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.982 13:17:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:15.240 MallocBdevForConfigChangeCheck 00:06:15.240 13:17:06 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:15.240 13:17:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.240 13:17:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.240 13:17:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:15.240 13:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.811 13:17:07 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:15.811 INFO: shutting down applications... 00:06:15.811 13:17:07 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:15.811 13:17:07 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:15.811 13:17:07 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:15.811 13:17:07 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:17.752 Calling clear_iscsi_subsystem 00:06:17.752 Calling clear_nvmf_subsystem 00:06:17.752 Calling clear_nbd_subsystem 00:06:17.752 Calling clear_ublk_subsystem 00:06:17.752 Calling clear_vhost_blk_subsystem 00:06:17.752 Calling clear_vhost_scsi_subsystem 00:06:17.752 Calling clear_bdev_subsystem 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@352 -- # break 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:17.752 13:17:09 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:17.752 13:17:09 json_config -- json_config/common.sh@31 -- # local app=target 00:06:17.752 13:17:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:17.752 13:17:09 json_config -- json_config/common.sh@35 -- # [[ -n 101939 ]] 00:06:17.752 13:17:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 101939 00:06:17.752 13:17:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:17.752 13:17:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.752 13:17:09 json_config -- json_config/common.sh@41 -- # kill -0 101939 00:06:17.752 13:17:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:18.324 13:17:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:18.324 13:17:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.324 13:17:10 json_config -- json_config/common.sh@41 -- # kill -0 101939 00:06:18.324 13:17:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:18.324 13:17:10 json_config -- json_config/common.sh@43 -- # break 00:06:18.324 13:17:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:18.324 13:17:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:18.324 SPDK target shutdown done 00:06:18.324 13:17:10 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:18.324 INFO: relaunching applications... 
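The shutdown sequence above sends `SIGINT` and then polls with `kill -0` up to 30 times, sleeping 0.5 s between checks. A self-contained sketch of that loop (the signal is parameterized here for illustration; the log uses `SIGINT`):

```shell
# Hypothetical sketch of the graceful-shutdown loop seen in the log:
# signal the target, then poll until kill -0 fails (process is gone).
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} i
    kill -"$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5    # matches the "sleep 0.5" retry cadence in the log
    done
    return 1    # process survived 30 checks; caller may escalate to SIGKILL
}
```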
00:06:18.324 13:17:10 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.324 13:17:10 json_config -- json_config/common.sh@9 -- # local app=target 00:06:18.324 13:17:10 json_config -- json_config/common.sh@10 -- # shift 00:06:18.324 13:17:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.324 13:17:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.324 13:17:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.324 13:17:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.324 13:17:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.324 13:17:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=103264 00:06:18.324 13:17:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.324 Waiting for target to run... 00:06:18.324 13:17:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.324 13:17:10 json_config -- json_config/common.sh@25 -- # waitforlisten 103264 /var/tmp/spdk_tgt.sock 00:06:18.324 13:17:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 103264 ']' 00:06:18.324 13:17:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.324 13:17:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.324 13:17:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:18.324 13:17:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.324 13:17:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.324 [2024-10-14 13:17:10.090331] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:18.324 [2024-10-14 13:17:10.090423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103264 ] 00:06:18.585 [2024-10-14 13:17:10.421061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.847 [2024-10-14 13:17:10.453948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.143 [2024-10-14 13:17:13.492367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.143 [2024-10-14 13:17:13.524800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:22.143 13:17:13 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.143 13:17:13 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:22.143 13:17:13 json_config -- json_config/common.sh@26 -- # echo '' 00:06:22.143 00:06:22.143 13:17:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:22.143 13:17:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:22.143 INFO: Checking if target configuration is the same... 
00:06:22.143 13:17:13 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.143 13:17:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:22.143 13:17:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.143 + '[' 2 -ne 2 ']' 00:06:22.143 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.143 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:22.143 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.143 +++ basename /dev/fd/62 00:06:22.143 ++ mktemp /tmp/62.XXX 00:06:22.143 + tmp_file_1=/tmp/62.xDq 00:06:22.143 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.143 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.143 + tmp_file_2=/tmp/spdk_tgt_config.json.fcr 00:06:22.143 + ret=0 00:06:22.143 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.143 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.402 + diff -u /tmp/62.xDq /tmp/spdk_tgt_config.json.fcr 00:06:22.402 + echo 'INFO: JSON config files are the same' 00:06:22.402 INFO: JSON config files are the same 00:06:22.402 + rm /tmp/62.xDq /tmp/spdk_tgt_config.json.fcr 00:06:22.402 + exit 0 00:06:22.402 13:17:14 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:22.402 13:17:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:22.402 INFO: changing configuration and checking if this can be detected... 
00:06:22.402 13:17:14 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.402 13:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.660 13:17:14 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.660 13:17:14 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:22.660 13:17:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.660 + '[' 2 -ne 2 ']' 00:06:22.660 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.660 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:22.660 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.660 +++ basename /dev/fd/62 00:06:22.660 ++ mktemp /tmp/62.XXX 00:06:22.660 + tmp_file_1=/tmp/62.AUv 00:06:22.660 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.660 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.660 + tmp_file_2=/tmp/spdk_tgt_config.json.l0K 00:06:22.660 + ret=0 00:06:22.660 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.919 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.919 + diff -u /tmp/62.AUv /tmp/spdk_tgt_config.json.l0K 00:06:22.919 + ret=1 00:06:22.919 + echo '=== Start of file: /tmp/62.AUv ===' 00:06:22.919 + cat /tmp/62.AUv 00:06:22.919 + echo '=== End of file: /tmp/62.AUv ===' 00:06:22.919 + echo '' 00:06:22.919 + echo '=== Start of file: /tmp/spdk_tgt_config.json.l0K ===' 00:06:22.919 + cat /tmp/spdk_tgt_config.json.l0K 00:06:22.919 + echo '=== End of file: /tmp/spdk_tgt_config.json.l0K ===' 00:06:22.919 + echo '' 00:06:22.919 + rm /tmp/62.AUv /tmp/spdk_tgt_config.json.l0K 00:06:22.919 + exit 1 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:22.919 INFO: configuration change detected. 
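The `json_diff.sh` run above dumps the live config and the saved config into `mktemp` files, normalizes both through `config_filter.py -method sort`, and diffs them; `ret=1` is what triggers "configuration change detected". A rough stand-in for that check, using Python's stock `json.tool --sort-keys` in place of SPDK's filter script (the function name and use of `json.tool` are assumptions for illustration):

```shell
# Hypothetical sketch: compare two JSON configs after key-order
# normalization, returning 1 on any difference (like ret=1 in the log).
configs_match() {
    local live=$1 saved=$2 t1 t2 rc=0
    t1=$(mktemp /tmp/cfg.XXXXXX) && t2=$(mktemp /tmp/cfg.XXXXXX)
    # Sort keys so semantically identical configs diff clean regardless
    # of the order in which subsystems were dumped.
    python3 -m json.tool --sort-keys "$live"  > "$t1"
    python3 -m json.tool --sort-keys "$saved" > "$t2"
    diff -u "$t1" "$t2" || rc=1
    rm -f "$t1" "$t2"
    return $rc
}
```

Deleting `MallocBdevForConfigChangeCheck` before the second comparison is what guarantees the two dumps differ, so the `ret=1` path is exercised deliberately.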
00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:22.919 13:17:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.919 13:17:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 103264 ]] 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:22.919 13:17:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.919 13:17:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:22.919 13:17:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:23.180 13:17:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:23.180 13:17:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:23.180 13:17:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:23.180 13:17:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.180 13:17:14 json_config -- json_config/json_config.sh@330 -- # killprocess 103264 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@950 -- # '[' -z 103264 ']' 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@954 -- # kill -0 103264 
00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@955 -- # uname 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103264 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103264' 00:06:23.180 killing process with pid 103264 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@969 -- # kill 103264 00:06:23.180 13:17:14 json_config -- common/autotest_common.sh@974 -- # wait 103264 00:06:24.561 13:17:16 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.561 13:17:16 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:24.561 13:17:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.561 13:17:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.561 13:17:16 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:24.561 13:17:16 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:24.561 INFO: Success 00:06:24.820 00:06:24.820 real 0m16.324s 00:06:24.820 user 0m18.492s 00:06:24.820 sys 0m2.031s 00:06:24.820 13:17:16 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.820 13:17:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.820 ************************************ 00:06:24.820 END TEST json_config 00:06:24.820 ************************************ 00:06:24.820 13:17:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:24.820 13:17:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.820 13:17:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.820 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:24.820 ************************************ 00:06:24.820 START TEST json_config_extra_key 00:06:24.820 ************************************ 00:06:24.820 13:17:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:24.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.821 --rc genhtml_branch_coverage=1 00:06:24.821 --rc genhtml_function_coverage=1 00:06:24.821 --rc genhtml_legend=1 00:06:24.821 --rc geninfo_all_blocks=1 
00:06:24.821 --rc geninfo_unexecuted_blocks=1 00:06:24.821 00:06:24.821 ' 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:24.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.821 --rc genhtml_branch_coverage=1 00:06:24.821 --rc genhtml_function_coverage=1 00:06:24.821 --rc genhtml_legend=1 00:06:24.821 --rc geninfo_all_blocks=1 00:06:24.821 --rc geninfo_unexecuted_blocks=1 00:06:24.821 00:06:24.821 ' 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:24.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.821 --rc genhtml_branch_coverage=1 00:06:24.821 --rc genhtml_function_coverage=1 00:06:24.821 --rc genhtml_legend=1 00:06:24.821 --rc geninfo_all_blocks=1 00:06:24.821 --rc geninfo_unexecuted_blocks=1 00:06:24.821 00:06:24.821 ' 00:06:24.821 13:17:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:24.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.821 --rc genhtml_branch_coverage=1 00:06:24.821 --rc genhtml_function_coverage=1 00:06:24.821 --rc genhtml_legend=1 00:06:24.821 --rc geninfo_all_blocks=1 00:06:24.821 --rc geninfo_unexecuted_blocks=1 00:06:24.821 00:06:24.821 ' 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.821 13:17:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.821 13:17:16 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.821 13:17:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.821 13:17:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.821 13:17:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:24.821 13:17:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:24.821 13:17:16 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.821 13:17:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:24.821 INFO: launching applications... 00:06:24.821 13:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:24.821 13:17:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:24.821 13:17:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:24.821 13:17:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:24.821 13:17:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:24.821 13:17:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:24.821 13:17:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.822 13:17:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.822 13:17:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=104177 00:06:24.822 13:17:16 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:24.822 13:17:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:24.822 Waiting for target to run... 
00:06:24.822 13:17:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 104177 /var/tmp/spdk_tgt.sock 00:06:24.822 13:17:16 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 104177 ']' 00:06:24.822 13:17:16 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.822 13:17:16 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.822 13:17:16 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:24.822 13:17:16 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.822 13:17:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:24.822 [2024-10-14 13:17:16.655346] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:24.822 [2024-10-14 13:17:16.655427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104177 ] 00:06:25.391 [2024-10-14 13:17:17.144170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.391 [2024-10-14 13:17:17.185698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.961 13:17:17 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.961 13:17:17 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:25.961 00:06:25.961 13:17:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:25.961 INFO: shutting down applications... 00:06:25.961 13:17:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 104177 ]] 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 104177 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 104177 00:06:25.961 13:17:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.531 13:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.531 13:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.531 13:17:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 104177 00:06:26.531 13:17:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:26.531 13:17:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:26.531 13:17:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:26.531 13:17:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:26.531 SPDK target shutdown done 00:06:26.531 13:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:26.531 Success 00:06:26.531 00:06:26.531 real 0m1.679s 00:06:26.531 user 0m1.503s 00:06:26.531 sys 0m0.612s 00:06:26.531 13:17:18 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.531 13:17:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:26.531 ************************************ 00:06:26.531 END TEST json_config_extra_key 00:06:26.531 ************************************ 00:06:26.531 13:17:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.531 13:17:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.531 13:17:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.531 13:17:18 -- common/autotest_common.sh@10 -- # set +x 00:06:26.531 ************************************ 00:06:26.531 START TEST alias_rpc 00:06:26.531 ************************************ 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.531 * Looking for test storage... 00:06:26.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.531 13:17:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.531 --rc genhtml_branch_coverage=1 00:06:26.531 --rc genhtml_function_coverage=1 00:06:26.531 --rc genhtml_legend=1 00:06:26.531 --rc geninfo_all_blocks=1 00:06:26.531 --rc geninfo_unexecuted_blocks=1 00:06:26.531 00:06:26.531 ' 
00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.531 --rc genhtml_branch_coverage=1 00:06:26.531 --rc genhtml_function_coverage=1 00:06:26.531 --rc genhtml_legend=1 00:06:26.531 --rc geninfo_all_blocks=1 00:06:26.531 --rc geninfo_unexecuted_blocks=1 00:06:26.531 00:06:26.531 ' 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.531 --rc genhtml_branch_coverage=1 00:06:26.531 --rc genhtml_function_coverage=1 00:06:26.531 --rc genhtml_legend=1 00:06:26.531 --rc geninfo_all_blocks=1 00:06:26.531 --rc geninfo_unexecuted_blocks=1 00:06:26.531 00:06:26.531 ' 00:06:26.531 13:17:18 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.531 --rc genhtml_branch_coverage=1 00:06:26.531 --rc genhtml_function_coverage=1 00:06:26.531 --rc genhtml_legend=1 00:06:26.531 --rc geninfo_all_blocks=1 00:06:26.531 --rc geninfo_unexecuted_blocks=1 00:06:26.531 00:06:26.531 ' 00:06:26.531 13:17:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.531 13:17:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=104383 00:06:26.531 13:17:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.531 13:17:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 104383 00:06:26.532 13:17:18 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 104383 ']' 00:06:26.532 13:17:18 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.532 13:17:18 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.532 13:17:18 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.532 13:17:18 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.532 13:17:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.793 [2024-10-14 13:17:18.391642] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:26.793 [2024-10-14 13:17:18.391739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104383 ] 00:06:26.793 [2024-10-14 13:17:18.452718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.793 [2024-10-14 13:17:18.498963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.053 13:17:18 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.053 13:17:18 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.053 13:17:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:27.313 13:17:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 104383 00:06:27.313 13:17:19 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 104383 ']' 00:06:27.313 13:17:19 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 104383 00:06:27.313 13:17:19 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:27.313 13:17:19 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.314 13:17:19 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104383 00:06:27.314 13:17:19 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.314 13:17:19 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.314 13:17:19 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 104383' 00:06:27.314 killing process with pid 104383 00:06:27.314 13:17:19 alias_rpc -- common/autotest_common.sh@969 -- # kill 104383 00:06:27.314 13:17:19 alias_rpc -- common/autotest_common.sh@974 -- # wait 104383 00:06:27.884 00:06:27.884 real 0m1.250s 00:06:27.884 user 0m1.383s 00:06:27.884 sys 0m0.423s 00:06:27.884 13:17:19 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.884 13:17:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.884 ************************************ 00:06:27.884 END TEST alias_rpc 00:06:27.884 ************************************ 00:06:27.884 13:17:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:27.884 13:17:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.884 13:17:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.884 13:17:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.884 13:17:19 -- common/autotest_common.sh@10 -- # set +x 00:06:27.884 ************************************ 00:06:27.884 START TEST spdkcli_tcp 00:06:27.884 ************************************ 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:27.884 * Looking for test storage... 
00:06:27.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.884 13:17:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.884 --rc genhtml_branch_coverage=1 00:06:27.884 --rc genhtml_function_coverage=1 00:06:27.884 --rc genhtml_legend=1 00:06:27.884 --rc geninfo_all_blocks=1 00:06:27.884 --rc geninfo_unexecuted_blocks=1 00:06:27.884 00:06:27.884 ' 00:06:27.884 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.884 --rc genhtml_branch_coverage=1 00:06:27.884 --rc genhtml_function_coverage=1 00:06:27.884 --rc genhtml_legend=1 00:06:27.884 --rc geninfo_all_blocks=1 00:06:27.884 --rc geninfo_unexecuted_blocks=1 00:06:27.885 00:06:27.885 ' 00:06:27.885 13:17:19 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.885 --rc genhtml_branch_coverage=1 00:06:27.885 --rc genhtml_function_coverage=1 00:06:27.885 --rc genhtml_legend=1 00:06:27.885 --rc geninfo_all_blocks=1 00:06:27.885 --rc geninfo_unexecuted_blocks=1 00:06:27.885 00:06:27.885 ' 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.885 --rc genhtml_branch_coverage=1 00:06:27.885 --rc genhtml_function_coverage=1 00:06:27.885 --rc genhtml_legend=1 00:06:27.885 --rc geninfo_all_blocks=1 00:06:27.885 --rc geninfo_unexecuted_blocks=1 00:06:27.885 00:06:27.885 ' 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104693 00:06:27.885 13:17:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:27.885 13:17:19 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 104693 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 104693 ']' 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.885 13:17:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.885 [2024-10-14 13:17:19.691959] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:27.885 [2024-10-14 13:17:19.692048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104693 ] 00:06:28.144 [2024-10-14 13:17:19.750264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.145 [2024-10-14 13:17:19.798816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.145 [2024-10-14 13:17:19.798819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.404 13:17:20 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.404 13:17:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:28.404 13:17:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104703 00:06:28.404 13:17:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:28.404 13:17:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:06:28.664 [ 00:06:28.664 "bdev_malloc_delete", 00:06:28.664 "bdev_malloc_create", 00:06:28.664 "bdev_null_resize", 00:06:28.664 "bdev_null_delete", 00:06:28.664 "bdev_null_create", 00:06:28.664 "bdev_nvme_cuse_unregister", 00:06:28.664 "bdev_nvme_cuse_register", 00:06:28.664 "bdev_opal_new_user", 00:06:28.664 "bdev_opal_set_lock_state", 00:06:28.664 "bdev_opal_delete", 00:06:28.664 "bdev_opal_get_info", 00:06:28.664 "bdev_opal_create", 00:06:28.664 "bdev_nvme_opal_revert", 00:06:28.664 "bdev_nvme_opal_init", 00:06:28.664 "bdev_nvme_send_cmd", 00:06:28.664 "bdev_nvme_set_keys", 00:06:28.664 "bdev_nvme_get_path_iostat", 00:06:28.664 "bdev_nvme_get_mdns_discovery_info", 00:06:28.664 "bdev_nvme_stop_mdns_discovery", 00:06:28.664 "bdev_nvme_start_mdns_discovery", 00:06:28.664 "bdev_nvme_set_multipath_policy", 00:06:28.664 "bdev_nvme_set_preferred_path", 00:06:28.664 "bdev_nvme_get_io_paths", 00:06:28.664 "bdev_nvme_remove_error_injection", 00:06:28.664 "bdev_nvme_add_error_injection", 00:06:28.664 "bdev_nvme_get_discovery_info", 00:06:28.664 "bdev_nvme_stop_discovery", 00:06:28.664 "bdev_nvme_start_discovery", 00:06:28.664 "bdev_nvme_get_controller_health_info", 00:06:28.664 "bdev_nvme_disable_controller", 00:06:28.664 "bdev_nvme_enable_controller", 00:06:28.664 "bdev_nvme_reset_controller", 00:06:28.664 "bdev_nvme_get_transport_statistics", 00:06:28.664 "bdev_nvme_apply_firmware", 00:06:28.664 "bdev_nvme_detach_controller", 00:06:28.664 "bdev_nvme_get_controllers", 00:06:28.664 "bdev_nvme_attach_controller", 00:06:28.664 "bdev_nvme_set_hotplug", 00:06:28.664 "bdev_nvme_set_options", 00:06:28.664 "bdev_passthru_delete", 00:06:28.664 "bdev_passthru_create", 00:06:28.664 "bdev_lvol_set_parent_bdev", 00:06:28.664 "bdev_lvol_set_parent", 00:06:28.664 "bdev_lvol_check_shallow_copy", 00:06:28.664 "bdev_lvol_start_shallow_copy", 00:06:28.664 "bdev_lvol_grow_lvstore", 00:06:28.664 "bdev_lvol_get_lvols", 00:06:28.664 "bdev_lvol_get_lvstores", 
00:06:28.664 "bdev_lvol_delete", 00:06:28.664 "bdev_lvol_set_read_only", 00:06:28.664 "bdev_lvol_resize", 00:06:28.664 "bdev_lvol_decouple_parent", 00:06:28.664 "bdev_lvol_inflate", 00:06:28.664 "bdev_lvol_rename", 00:06:28.664 "bdev_lvol_clone_bdev", 00:06:28.664 "bdev_lvol_clone", 00:06:28.664 "bdev_lvol_snapshot", 00:06:28.664 "bdev_lvol_create", 00:06:28.664 "bdev_lvol_delete_lvstore", 00:06:28.664 "bdev_lvol_rename_lvstore", 00:06:28.664 "bdev_lvol_create_lvstore", 00:06:28.664 "bdev_raid_set_options", 00:06:28.664 "bdev_raid_remove_base_bdev", 00:06:28.664 "bdev_raid_add_base_bdev", 00:06:28.664 "bdev_raid_delete", 00:06:28.664 "bdev_raid_create", 00:06:28.664 "bdev_raid_get_bdevs", 00:06:28.664 "bdev_error_inject_error", 00:06:28.664 "bdev_error_delete", 00:06:28.664 "bdev_error_create", 00:06:28.664 "bdev_split_delete", 00:06:28.664 "bdev_split_create", 00:06:28.664 "bdev_delay_delete", 00:06:28.664 "bdev_delay_create", 00:06:28.664 "bdev_delay_update_latency", 00:06:28.664 "bdev_zone_block_delete", 00:06:28.664 "bdev_zone_block_create", 00:06:28.664 "blobfs_create", 00:06:28.664 "blobfs_detect", 00:06:28.664 "blobfs_set_cache_size", 00:06:28.664 "bdev_aio_delete", 00:06:28.664 "bdev_aio_rescan", 00:06:28.664 "bdev_aio_create", 00:06:28.664 "bdev_ftl_set_property", 00:06:28.664 "bdev_ftl_get_properties", 00:06:28.664 "bdev_ftl_get_stats", 00:06:28.664 "bdev_ftl_unmap", 00:06:28.664 "bdev_ftl_unload", 00:06:28.664 "bdev_ftl_delete", 00:06:28.664 "bdev_ftl_load", 00:06:28.664 "bdev_ftl_create", 00:06:28.664 "bdev_virtio_attach_controller", 00:06:28.664 "bdev_virtio_scsi_get_devices", 00:06:28.664 "bdev_virtio_detach_controller", 00:06:28.664 "bdev_virtio_blk_set_hotplug", 00:06:28.664 "bdev_iscsi_delete", 00:06:28.664 "bdev_iscsi_create", 00:06:28.664 "bdev_iscsi_set_options", 00:06:28.664 "accel_error_inject_error", 00:06:28.664 "ioat_scan_accel_module", 00:06:28.664 "dsa_scan_accel_module", 00:06:28.664 "iaa_scan_accel_module", 00:06:28.664 
"vfu_virtio_create_fs_endpoint", 00:06:28.664 "vfu_virtio_create_scsi_endpoint", 00:06:28.664 "vfu_virtio_scsi_remove_target", 00:06:28.664 "vfu_virtio_scsi_add_target", 00:06:28.664 "vfu_virtio_create_blk_endpoint", 00:06:28.664 "vfu_virtio_delete_endpoint", 00:06:28.664 "keyring_file_remove_key", 00:06:28.664 "keyring_file_add_key", 00:06:28.664 "keyring_linux_set_options", 00:06:28.664 "fsdev_aio_delete", 00:06:28.664 "fsdev_aio_create", 00:06:28.664 "iscsi_get_histogram", 00:06:28.664 "iscsi_enable_histogram", 00:06:28.664 "iscsi_set_options", 00:06:28.664 "iscsi_get_auth_groups", 00:06:28.664 "iscsi_auth_group_remove_secret", 00:06:28.664 "iscsi_auth_group_add_secret", 00:06:28.664 "iscsi_delete_auth_group", 00:06:28.664 "iscsi_create_auth_group", 00:06:28.664 "iscsi_set_discovery_auth", 00:06:28.664 "iscsi_get_options", 00:06:28.664 "iscsi_target_node_request_logout", 00:06:28.664 "iscsi_target_node_set_redirect", 00:06:28.664 "iscsi_target_node_set_auth", 00:06:28.664 "iscsi_target_node_add_lun", 00:06:28.664 "iscsi_get_stats", 00:06:28.664 "iscsi_get_connections", 00:06:28.664 "iscsi_portal_group_set_auth", 00:06:28.664 "iscsi_start_portal_group", 00:06:28.664 "iscsi_delete_portal_group", 00:06:28.664 "iscsi_create_portal_group", 00:06:28.664 "iscsi_get_portal_groups", 00:06:28.664 "iscsi_delete_target_node", 00:06:28.665 "iscsi_target_node_remove_pg_ig_maps", 00:06:28.665 "iscsi_target_node_add_pg_ig_maps", 00:06:28.665 "iscsi_create_target_node", 00:06:28.665 "iscsi_get_target_nodes", 00:06:28.665 "iscsi_delete_initiator_group", 00:06:28.665 "iscsi_initiator_group_remove_initiators", 00:06:28.665 "iscsi_initiator_group_add_initiators", 00:06:28.665 "iscsi_create_initiator_group", 00:06:28.665 "iscsi_get_initiator_groups", 00:06:28.665 "nvmf_set_crdt", 00:06:28.665 "nvmf_set_config", 00:06:28.665 "nvmf_set_max_subsystems", 00:06:28.665 "nvmf_stop_mdns_prr", 00:06:28.665 "nvmf_publish_mdns_prr", 00:06:28.665 "nvmf_subsystem_get_listeners", 00:06:28.665 
"nvmf_subsystem_get_qpairs", 00:06:28.665 "nvmf_subsystem_get_controllers", 00:06:28.665 "nvmf_get_stats", 00:06:28.665 "nvmf_get_transports", 00:06:28.665 "nvmf_create_transport", 00:06:28.665 "nvmf_get_targets", 00:06:28.665 "nvmf_delete_target", 00:06:28.665 "nvmf_create_target", 00:06:28.665 "nvmf_subsystem_allow_any_host", 00:06:28.665 "nvmf_subsystem_set_keys", 00:06:28.665 "nvmf_subsystem_remove_host", 00:06:28.665 "nvmf_subsystem_add_host", 00:06:28.665 "nvmf_ns_remove_host", 00:06:28.665 "nvmf_ns_add_host", 00:06:28.665 "nvmf_subsystem_remove_ns", 00:06:28.665 "nvmf_subsystem_set_ns_ana_group", 00:06:28.665 "nvmf_subsystem_add_ns", 00:06:28.665 "nvmf_subsystem_listener_set_ana_state", 00:06:28.665 "nvmf_discovery_get_referrals", 00:06:28.665 "nvmf_discovery_remove_referral", 00:06:28.665 "nvmf_discovery_add_referral", 00:06:28.665 "nvmf_subsystem_remove_listener", 00:06:28.665 "nvmf_subsystem_add_listener", 00:06:28.665 "nvmf_delete_subsystem", 00:06:28.665 "nvmf_create_subsystem", 00:06:28.665 "nvmf_get_subsystems", 00:06:28.665 "env_dpdk_get_mem_stats", 00:06:28.665 "nbd_get_disks", 00:06:28.665 "nbd_stop_disk", 00:06:28.665 "nbd_start_disk", 00:06:28.665 "ublk_recover_disk", 00:06:28.665 "ublk_get_disks", 00:06:28.665 "ublk_stop_disk", 00:06:28.665 "ublk_start_disk", 00:06:28.665 "ublk_destroy_target", 00:06:28.665 "ublk_create_target", 00:06:28.665 "virtio_blk_create_transport", 00:06:28.665 "virtio_blk_get_transports", 00:06:28.665 "vhost_controller_set_coalescing", 00:06:28.665 "vhost_get_controllers", 00:06:28.665 "vhost_delete_controller", 00:06:28.665 "vhost_create_blk_controller", 00:06:28.665 "vhost_scsi_controller_remove_target", 00:06:28.665 "vhost_scsi_controller_add_target", 00:06:28.665 "vhost_start_scsi_controller", 00:06:28.665 "vhost_create_scsi_controller", 00:06:28.665 "thread_set_cpumask", 00:06:28.665 "scheduler_set_options", 00:06:28.665 "framework_get_governor", 00:06:28.665 "framework_get_scheduler", 00:06:28.665 
"framework_set_scheduler", 00:06:28.665 "framework_get_reactors", 00:06:28.665 "thread_get_io_channels", 00:06:28.665 "thread_get_pollers", 00:06:28.665 "thread_get_stats", 00:06:28.665 "framework_monitor_context_switch", 00:06:28.665 "spdk_kill_instance", 00:06:28.665 "log_enable_timestamps", 00:06:28.665 "log_get_flags", 00:06:28.665 "log_clear_flag", 00:06:28.665 "log_set_flag", 00:06:28.665 "log_get_level", 00:06:28.665 "log_set_level", 00:06:28.665 "log_get_print_level", 00:06:28.665 "log_set_print_level", 00:06:28.665 "framework_enable_cpumask_locks", 00:06:28.665 "framework_disable_cpumask_locks", 00:06:28.665 "framework_wait_init", 00:06:28.665 "framework_start_init", 00:06:28.665 "scsi_get_devices", 00:06:28.665 "bdev_get_histogram", 00:06:28.665 "bdev_enable_histogram", 00:06:28.665 "bdev_set_qos_limit", 00:06:28.665 "bdev_set_qd_sampling_period", 00:06:28.665 "bdev_get_bdevs", 00:06:28.665 "bdev_reset_iostat", 00:06:28.665 "bdev_get_iostat", 00:06:28.665 "bdev_examine", 00:06:28.665 "bdev_wait_for_examine", 00:06:28.665 "bdev_set_options", 00:06:28.665 "accel_get_stats", 00:06:28.665 "accel_set_options", 00:06:28.665 "accel_set_driver", 00:06:28.665 "accel_crypto_key_destroy", 00:06:28.665 "accel_crypto_keys_get", 00:06:28.665 "accel_crypto_key_create", 00:06:28.665 "accel_assign_opc", 00:06:28.665 "accel_get_module_info", 00:06:28.665 "accel_get_opc_assignments", 00:06:28.665 "vmd_rescan", 00:06:28.665 "vmd_remove_device", 00:06:28.665 "vmd_enable", 00:06:28.665 "sock_get_default_impl", 00:06:28.665 "sock_set_default_impl", 00:06:28.665 "sock_impl_set_options", 00:06:28.665 "sock_impl_get_options", 00:06:28.665 "iobuf_get_stats", 00:06:28.665 "iobuf_set_options", 00:06:28.665 "keyring_get_keys", 00:06:28.665 "vfu_tgt_set_base_path", 00:06:28.665 "framework_get_pci_devices", 00:06:28.665 "framework_get_config", 00:06:28.665 "framework_get_subsystems", 00:06:28.665 "fsdev_set_opts", 00:06:28.665 "fsdev_get_opts", 00:06:28.665 "trace_get_info", 
00:06:28.665 "trace_get_tpoint_group_mask", 00:06:28.665 "trace_disable_tpoint_group", 00:06:28.665 "trace_enable_tpoint_group", 00:06:28.665 "trace_clear_tpoint_mask", 00:06:28.665 "trace_set_tpoint_mask", 00:06:28.665 "notify_get_notifications", 00:06:28.665 "notify_get_types", 00:06:28.665 "spdk_get_version", 00:06:28.665 "rpc_get_methods" 00:06:28.665 ] 00:06:28.665 13:17:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.665 13:17:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:28.665 13:17:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104693 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 104693 ']' 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 104693 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104693 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104693' 00:06:28.665 killing process with pid 104693 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 104693 00:06:28.665 13:17:20 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 104693 00:06:28.924 00:06:28.924 real 0m1.271s 00:06:28.924 user 0m2.301s 00:06:28.924 sys 0m0.473s 00:06:28.924 13:17:20 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.924 13:17:20 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:28.925 ************************************ 00:06:28.925 END TEST spdkcli_tcp 00:06:28.925 ************************************ 00:06:29.185 13:17:20 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.185 13:17:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.185 13:17:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.185 13:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 ************************************ 00:06:29.185 START TEST dpdk_mem_utility 00:06:29.185 ************************************ 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.185 * Looking for test storage... 00:06:29.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.185 13:17:20 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.185 13:17:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.185 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.185 --rc genhtml_branch_coverage=1 00:06:29.185 --rc genhtml_function_coverage=1 00:06:29.185 --rc genhtml_legend=1 00:06:29.185 --rc geninfo_all_blocks=1 00:06:29.185 --rc geninfo_unexecuted_blocks=1 00:06:29.185 00:06:29.185 ' 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.185 --rc genhtml_branch_coverage=1 00:06:29.185 --rc genhtml_function_coverage=1 00:06:29.185 --rc genhtml_legend=1 00:06:29.185 --rc geninfo_all_blocks=1 00:06:29.185 --rc geninfo_unexecuted_blocks=1 00:06:29.185 00:06:29.185 ' 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.185 --rc genhtml_branch_coverage=1 00:06:29.185 --rc genhtml_function_coverage=1 00:06:29.185 --rc genhtml_legend=1 00:06:29.185 --rc geninfo_all_blocks=1 00:06:29.185 --rc geninfo_unexecuted_blocks=1 00:06:29.185 00:06:29.185 ' 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.185 --rc genhtml_branch_coverage=1 00:06:29.185 --rc genhtml_function_coverage=1 00:06:29.185 --rc genhtml_legend=1 00:06:29.185 --rc geninfo_all_blocks=1 00:06:29.185 --rc geninfo_unexecuted_blocks=1 00:06:29.185 00:06:29.185 ' 00:06:29.185 13:17:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.185 13:17:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104907 00:06:29.185 13:17:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.185 13:17:20 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104907 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 104907 ']' 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.185 13:17:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 [2024-10-14 13:17:21.014536] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:29.185 [2024-10-14 13:17:21.014631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104907 ] 00:06:29.446 [2024-10-14 13:17:21.073672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.446 [2024-10-14 13:17:21.119484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.708 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.708 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:29.708 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:29.708 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:29.708 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.708 
13:17:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.708 { 00:06:29.708 "filename": "/tmp/spdk_mem_dump.txt" 00:06:29.708 } 00:06:29.708 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.708 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.708 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:29.708 1 heaps totaling size 810.000000 MiB 00:06:29.708 size: 810.000000 MiB heap id: 0 00:06:29.708 end heaps---------- 00:06:29.708 9 mempools totaling size 595.772034 MiB 00:06:29.708 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:29.708 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:29.708 size: 92.545471 MiB name: bdev_io_104907 00:06:29.708 size: 50.003479 MiB name: msgpool_104907 00:06:29.708 size: 36.509338 MiB name: fsdev_io_104907 00:06:29.708 size: 21.763794 MiB name: PDU_Pool 00:06:29.708 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:29.708 size: 4.133484 MiB name: evtpool_104907 00:06:29.708 size: 0.026123 MiB name: Session_Pool 00:06:29.708 end mempools------- 00:06:29.708 6 memzones totaling size 4.142822 MiB 00:06:29.708 size: 1.000366 MiB name: RG_ring_0_104907 00:06:29.708 size: 1.000366 MiB name: RG_ring_1_104907 00:06:29.708 size: 1.000366 MiB name: RG_ring_4_104907 00:06:29.708 size: 1.000366 MiB name: RG_ring_5_104907 00:06:29.708 size: 0.125366 MiB name: RG_ring_2_104907 00:06:29.708 size: 0.015991 MiB name: RG_ring_3_104907 00:06:29.708 end memzones------- 00:06:29.708 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:29.708 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:29.708 list of free elements. 
size: 10.862488 MiB 00:06:29.708 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:29.708 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:29.708 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:29.708 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:29.708 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:29.708 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:29.708 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:29.708 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:29.708 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:29.708 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:29.708 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:29.708 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:29.708 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:29.708 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:29.708 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:29.708 list of standard malloc elements. 
size: 199.218628 MiB 00:06:29.708 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:29.708 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:29.708 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:29.708 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:29.708 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:29.708 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:29.708 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:29.708 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:29.708 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:29.708 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:29.708 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:29.708 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:29.708 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:29.708 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:29.708 list of memzone associated elements. 
size: 599.918884 MiB 00:06:29.708 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:29.708 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:29.708 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:29.708 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:29.708 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:29.708 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_104907_0 00:06:29.709 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:29.709 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104907_0 00:06:29.709 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:29.709 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_104907_0 00:06:29.709 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:29.709 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:29.709 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:29.709 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:29.709 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:29.709 associated memzone info: size: 3.000122 MiB name: MP_evtpool_104907_0 00:06:29.709 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:29.709 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104907 00:06:29.709 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:29.709 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104907 00:06:29.709 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:29.709 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:29.709 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:29.709 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:29.709 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:29.709 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:29.709 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:29.709 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:29.709 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:29.709 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104907 00:06:29.709 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:29.709 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104907 00:06:29.709 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:29.709 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104907 00:06:29.709 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:29.709 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104907 00:06:29.709 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:29.709 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_104907 00:06:29.709 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:29.709 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104907 00:06:29.709 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:29.709 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:29.709 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:29.709 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:29.709 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:29.709 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:29.709 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:29.709 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_104907 00:06:29.709 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:29.709 associated memzone info: size: 0.125366 MiB name: RG_ring_2_104907 00:06:29.709 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:29.709 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:29.709 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:29.709 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:29.709 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:29.709 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104907 00:06:29.709 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:29.709 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:29.709 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:29.709 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104907 00:06:29.709 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:29.709 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_104907 00:06:29.709 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:29.709 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104907 00:06:29.709 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:29.709 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:29.709 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:29.709 13:17:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104907 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 104907 ']' 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 104907 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104907 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.709 13:17:21 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104907' 00:06:29.709 killing process with pid 104907 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 104907 00:06:29.709 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 104907 00:06:30.278 00:06:30.278 real 0m1.074s 00:06:30.278 user 0m1.037s 00:06:30.278 sys 0m0.419s 00:06:30.278 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.278 13:17:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.278 ************************************ 00:06:30.278 END TEST dpdk_mem_utility 00:06:30.278 ************************************ 00:06:30.278 13:17:21 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:30.278 13:17:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.278 13:17:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.278 13:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:30.278 ************************************ 00:06:30.278 START TEST event 00:06:30.278 ************************************ 00:06:30.278 13:17:21 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:30.278 * Looking for test storage... 
00:06:30.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:30.278 13:17:21 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.278 13:17:21 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.278 13:17:21 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.278 13:17:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.278 13:17:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.278 13:17:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.278 13:17:22 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.278 13:17:22 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.278 13:17:22 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.278 13:17:22 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.278 13:17:22 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.278 13:17:22 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.278 13:17:22 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.278 13:17:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.278 13:17:22 event -- scripts/common.sh@344 -- # case "$op" in 00:06:30.278 13:17:22 event -- scripts/common.sh@345 -- # : 1 00:06:30.278 13:17:22 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.278 13:17:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.278 13:17:22 event -- scripts/common.sh@365 -- # decimal 1 00:06:30.278 13:17:22 event -- scripts/common.sh@353 -- # local d=1 00:06:30.278 13:17:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.278 13:17:22 event -- scripts/common.sh@355 -- # echo 1 00:06:30.278 13:17:22 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.278 13:17:22 event -- scripts/common.sh@366 -- # decimal 2 00:06:30.278 13:17:22 event -- scripts/common.sh@353 -- # local d=2 00:06:30.278 13:17:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.278 13:17:22 event -- scripts/common.sh@355 -- # echo 2 00:06:30.278 13:17:22 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.278 13:17:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.278 13:17:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.278 13:17:22 event -- scripts/common.sh@368 -- # return 0 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.278 --rc genhtml_branch_coverage=1 00:06:30.278 --rc genhtml_function_coverage=1 00:06:30.278 --rc genhtml_legend=1 00:06:30.278 --rc geninfo_all_blocks=1 00:06:30.278 --rc geninfo_unexecuted_blocks=1 00:06:30.278 00:06:30.278 ' 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.278 --rc genhtml_branch_coverage=1 00:06:30.278 --rc genhtml_function_coverage=1 00:06:30.278 --rc genhtml_legend=1 00:06:30.278 --rc geninfo_all_blocks=1 00:06:30.278 --rc geninfo_unexecuted_blocks=1 00:06:30.278 00:06:30.278 ' 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.278 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:30.278 --rc genhtml_branch_coverage=1 00:06:30.278 --rc genhtml_function_coverage=1 00:06:30.278 --rc genhtml_legend=1 00:06:30.278 --rc geninfo_all_blocks=1 00:06:30.278 --rc geninfo_unexecuted_blocks=1 00:06:30.278 00:06:30.278 ' 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.278 --rc genhtml_branch_coverage=1 00:06:30.278 --rc genhtml_function_coverage=1 00:06:30.278 --rc genhtml_legend=1 00:06:30.278 --rc geninfo_all_blocks=1 00:06:30.278 --rc geninfo_unexecuted_blocks=1 00:06:30.278 00:06:30.278 ' 00:06:30.278 13:17:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:30.278 13:17:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.278 13:17:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:30.278 13:17:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.278 13:17:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.278 ************************************ 00:06:30.278 START TEST event_perf 00:06:30.278 ************************************ 00:06:30.278 13:17:22 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.278 Running I/O for 1 seconds...[2024-10-14 13:17:22.130446] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:06:30.278 [2024-10-14 13:17:22.130515] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105105 ] 00:06:30.538 [2024-10-14 13:17:22.187050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.538 [2024-10-14 13:17:22.235234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.538 [2024-10-14 13:17:22.235291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.538 [2024-10-14 13:17:22.235357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.538 [2024-10-14 13:17:22.235360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.478 Running I/O for 1 seconds... 00:06:31.478 lcore 0: 228134 00:06:31.478 lcore 1: 228134 00:06:31.478 lcore 2: 228135 00:06:31.478 lcore 3: 228134 00:06:31.478 done. 
00:06:31.478 00:06:31.478 real 0m1.165s 00:06:31.478 user 0m4.095s 00:06:31.478 sys 0m0.065s 00:06:31.478 13:17:23 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.478 13:17:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.478 ************************************ 00:06:31.478 END TEST event_perf 00:06:31.478 ************************************ 00:06:31.478 13:17:23 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.478 13:17:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:31.478 13:17:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.478 13:17:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.478 ************************************ 00:06:31.478 START TEST event_reactor 00:06:31.478 ************************************ 00:06:31.478 13:17:23 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.737 [2024-10-14 13:17:23.338501] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:06:31.737 [2024-10-14 13:17:23.338564] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105260 ] 00:06:31.737 [2024-10-14 13:17:23.392437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.737 [2024-10-14 13:17:23.436948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.678 test_start 00:06:32.678 oneshot 00:06:32.678 tick 100 00:06:32.678 tick 100 00:06:32.678 tick 250 00:06:32.678 tick 100 00:06:32.678 tick 100 00:06:32.678 tick 100 00:06:32.678 tick 250 00:06:32.678 tick 500 00:06:32.678 tick 100 00:06:32.678 tick 100 00:06:32.678 tick 250 00:06:32.678 tick 100 00:06:32.678 tick 100 00:06:32.678 test_end 00:06:32.678 00:06:32.678 real 0m1.153s 00:06:32.678 user 0m1.092s 00:06:32.678 sys 0m0.057s 00:06:32.678 13:17:24 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.678 13:17:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.678 ************************************ 00:06:32.678 END TEST event_reactor 00:06:32.678 ************************************ 00:06:32.678 13:17:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.678 13:17:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:32.678 13:17:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.678 13:17:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.678 ************************************ 00:06:32.678 START TEST event_reactor_perf 00:06:32.678 ************************************ 00:06:32.678 13:17:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:32.937 [2024-10-14 13:17:24.543808] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:32.937 [2024-10-14 13:17:24.543873] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105412 ] 00:06:32.937 [2024-10-14 13:17:24.600619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.937 [2024-10-14 13:17:24.645575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.880 test_start 00:06:33.880 test_end 00:06:33.880 Performance: 445506 events per second 00:06:33.880 00:06:33.880 real 0m1.159s 00:06:33.880 user 0m1.087s 00:06:33.880 sys 0m0.068s 00:06:33.880 13:17:25 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.880 13:17:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.880 ************************************ 00:06:33.880 END TEST event_reactor_perf 00:06:33.880 ************************************ 00:06:33.880 13:17:25 event -- event/event.sh@49 -- # uname -s 00:06:33.880 13:17:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.880 13:17:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:33.880 13:17:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.880 13:17:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.880 13:17:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.140 ************************************ 00:06:34.140 START TEST event_scheduler 00:06:34.140 ************************************ 00:06:34.140 13:17:25 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:34.140 * Looking for test storage... 00:06:34.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:34.140 13:17:25 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.140 13:17:25 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.140 13:17:25 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.140 13:17:25 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.140 13:17:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:34.140 13:17:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.141 --rc genhtml_branch_coverage=1 00:06:34.141 --rc genhtml_function_coverage=1 00:06:34.141 --rc genhtml_legend=1 00:06:34.141 --rc geninfo_all_blocks=1 00:06:34.141 --rc geninfo_unexecuted_blocks=1 00:06:34.141 00:06:34.141 ' 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.141 --rc genhtml_branch_coverage=1 00:06:34.141 --rc genhtml_function_coverage=1 00:06:34.141 --rc 
genhtml_legend=1 00:06:34.141 --rc geninfo_all_blocks=1 00:06:34.141 --rc geninfo_unexecuted_blocks=1 00:06:34.141 00:06:34.141 ' 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.141 --rc genhtml_branch_coverage=1 00:06:34.141 --rc genhtml_function_coverage=1 00:06:34.141 --rc genhtml_legend=1 00:06:34.141 --rc geninfo_all_blocks=1 00:06:34.141 --rc geninfo_unexecuted_blocks=1 00:06:34.141 00:06:34.141 ' 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.141 --rc genhtml_branch_coverage=1 00:06:34.141 --rc genhtml_function_coverage=1 00:06:34.141 --rc genhtml_legend=1 00:06:34.141 --rc geninfo_all_blocks=1 00:06:34.141 --rc geninfo_unexecuted_blocks=1 00:06:34.141 00:06:34.141 ' 00:06:34.141 13:17:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:34.141 13:17:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105612 00:06:34.141 13:17:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:34.141 13:17:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.141 13:17:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105612 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 105612 ']' 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.141 13:17:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.141 [2024-10-14 13:17:25.931873] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:34.141 [2024-10-14 13:17:25.931954] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105612 ] 00:06:34.141 [2024-10-14 13:17:25.993319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.401 [2024-10-14 13:17:26.044426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.401 [2024-10-14 13:17:26.044481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.401 [2024-10-14 13:17:26.044548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.401 [2024-10-14 13:17:26.044551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.401 13:17:26 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.401 13:17:26 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:34.401 13:17:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.401 13:17:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.401 13:17:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.401 [2024-10-14 13:17:26.177594] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:34.401 [2024-10-14 13:17:26.177621] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.401 [2024-10-14 13:17:26.177654] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:34.401 [2024-10-14 13:17:26.177665] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:34.401 [2024-10-14 13:17:26.177675] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:34.401 13:17:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.401 13:17:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.401 13:17:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.401 13:17:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.663 [2024-10-14 13:17:26.274050] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:34.663 13:17:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.663 13:17:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.663 13:17:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.663 13:17:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.663 13:17:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.663 ************************************ 00:06:34.663 START TEST scheduler_create_thread 00:06:34.663 ************************************ 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.663 2 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.663 3 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.663 4 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.663 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 5 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 6 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 7 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 8 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 9 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 10 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.664 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.235 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.235 00:06:35.235 real 0m0.590s 00:06:35.235 user 0m0.011s 00:06:35.235 sys 0m0.002s 00:06:35.235 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.235 13:17:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.235 ************************************ 00:06:35.235 END TEST scheduler_create_thread 00:06:35.235 ************************************ 00:06:35.235 13:17:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:35.235 13:17:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105612 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 105612 ']' 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@954 -- # kill 
-0 105612 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105612 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105612' 00:06:35.235 killing process with pid 105612 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 105612 00:06:35.235 13:17:26 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 105612 00:06:35.805 [2024-10-14 13:17:27.374190] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:35.805 00:06:35.805 real 0m1.814s 00:06:35.805 user 0m2.544s 00:06:35.805 sys 0m0.347s 00:06:35.805 13:17:27 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.805 13:17:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 ************************************ 00:06:35.805 END TEST event_scheduler 00:06:35.805 ************************************ 00:06:35.805 13:17:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:35.805 13:17:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:35.805 13:17:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.805 13:17:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.805 13:17:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 ************************************ 00:06:35.805 START TEST app_repeat 00:06:35.805 ************************************ 00:06:35.805 13:17:27 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=105916 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105916' 00:06:35.805 Process app_repeat pid: 105916 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:35.805 spdk_app_start Round 0 00:06:35.805 13:17:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105916 /var/tmp/spdk-nbd.sock 00:06:35.805 13:17:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105916 ']' 00:06:35.805 13:17:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.805 13:17:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.805 13:17:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.805 13:17:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.805 13:17:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 [2024-10-14 13:17:27.635546] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:06:35.805 [2024-10-14 13:17:27.635611] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105916 ] 00:06:36.064 [2024-10-14 13:17:27.694215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.064 [2024-10-14 13:17:27.737649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.064 [2024-10-14 13:17:27.737653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.064 13:17:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.064 13:17:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:36.064 13:17:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.322 Malloc0 00:06:36.322 13:17:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.582 Malloc1 00:06:36.842 13:17:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.842 
13:17:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.842 13:17:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.101 /dev/nbd0 00:06:37.101 13:17:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.101 13:17:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:37.101 1+0 records in 00:06:37.101 1+0 records out 00:06:37.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159754 s, 25.6 MB/s 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.101 13:17:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:37.101 13:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.101 13:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.101 13:17:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.360 /dev/nbd1 00:06:37.360 13:17:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.360 13:17:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.360 13:17:29 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.360 1+0 records in 00:06:37.360 1+0 records out 00:06:37.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199837 s, 20.5 MB/s 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.360 13:17:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:37.360 13:17:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.360 13:17:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.360 13:17:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.360 13:17:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.360 13:17:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.619 { 00:06:37.619 "nbd_device": "/dev/nbd0", 00:06:37.619 "bdev_name": "Malloc0" 00:06:37.619 }, 00:06:37.619 { 00:06:37.619 "nbd_device": "/dev/nbd1", 00:06:37.619 "bdev_name": "Malloc1" 00:06:37.619 } 00:06:37.619 ]' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.619 { 00:06:37.619 "nbd_device": "/dev/nbd0", 00:06:37.619 "bdev_name": "Malloc0" 00:06:37.619 
}, 00:06:37.619 { 00:06:37.619 "nbd_device": "/dev/nbd1", 00:06:37.619 "bdev_name": "Malloc1" 00:06:37.619 } 00:06:37.619 ]' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.619 /dev/nbd1' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.619 /dev/nbd1' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.619 256+0 records in 00:06:37.619 256+0 records out 00:06:37.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385093 s, 272 MB/s 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.619 256+0 records in 00:06:37.619 256+0 records out 00:06:37.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193755 s, 54.1 MB/s 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.619 256+0 records in 00:06:37.619 256+0 records out 00:06:37.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220075 s, 47.6 MB/s 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.619 13:17:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.879 13:17:29 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.879 13:17:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.138 13:17:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.397 13:17:30 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.397 13:17:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.655 13:17:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.655 13:17:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.656 13:17:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.656 13:17:30 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.914 13:17:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.174 [2024-10-14 13:17:30.867310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.174 [2024-10-14 13:17:30.910650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.174 [2024-10-14 13:17:30.910654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.174 [2024-10-14 13:17:30.967639] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.174 [2024-10-14 13:17:30.967712] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.460 13:17:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.460 13:17:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.460 spdk_app_start Round 1 00:06:42.460 13:17:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105916 /var/tmp/spdk-nbd.sock 00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105916 ']' 00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.460 13:17:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:42.460 13:17:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.460 Malloc0 00:06:42.460 13:17:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.719 Malloc1 00:06:42.719 13:17:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.719 13:17:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.978 /dev/nbd0 00:06:42.978 13:17:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.978 13:17:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.978 13:17:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:42.979 13:17:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.237 1+0 records in 00:06:43.237 1+0 records out 00:06:43.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257989 s, 15.9 MB/s 00:06:43.237 13:17:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.237 13:17:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:43.237 13:17:34 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.237 13:17:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:43.237 13:17:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:43.237 13:17:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.237 13:17:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.237 13:17:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.495 /dev/nbd1 00:06:43.495 13:17:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.495 13:17:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.495 13:17:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:43.495 13:17:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:43.495 13:17:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:43.495 13:17:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.496 1+0 records in 00:06:43.496 1+0 records out 00:06:43.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292414 s, 14.0 MB/s 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:43.496 13:17:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:43.496 13:17:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.496 13:17:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.496 13:17:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.496 13:17:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.496 13:17:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.754 { 00:06:43.754 "nbd_device": "/dev/nbd0", 00:06:43.754 "bdev_name": "Malloc0" 00:06:43.754 }, 00:06:43.754 { 00:06:43.754 "nbd_device": "/dev/nbd1", 00:06:43.754 "bdev_name": "Malloc1" 00:06:43.754 } 00:06:43.754 ]' 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.754 { 00:06:43.754 "nbd_device": "/dev/nbd0", 00:06:43.754 "bdev_name": "Malloc0" 00:06:43.754 }, 00:06:43.754 { 00:06:43.754 "nbd_device": "/dev/nbd1", 00:06:43.754 "bdev_name": "Malloc1" 00:06:43.754 } 00:06:43.754 ]' 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.754 /dev/nbd1' 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.754 /dev/nbd1' 00:06:43.754 
13:17:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.754 13:17:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.755 256+0 records in 00:06:43.755 256+0 records out 00:06:43.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502616 s, 209 MB/s 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.755 256+0 records in 00:06:43.755 256+0 records out 00:06:43.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02016 s, 52.0 MB/s 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.755 256+0 records in 00:06:43.755 256+0 records out 00:06:43.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222045 s, 47.2 MB/s 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.755 13:17:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.014 13:17:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.274 13:17:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.533 13:17:36 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.533 13:17:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.791 13:17:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.791 13:17:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.052 13:17:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:45.312 [2024-10-14 13:17:36.936313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.312 [2024-10-14 13:17:36.978715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.312 [2024-10-14 13:17:36.978715] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.312 [2024-10-14 13:17:37.036754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.312 [2024-10-14 13:17:37.036826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.604 13:17:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.604 13:17:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.604 spdk_app_start Round 2 00:06:48.604 13:17:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105916 /var/tmp/spdk-nbd.sock 00:06:48.604 13:17:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105916 ']' 00:06:48.604 13:17:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.604 13:17:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.604 13:17:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:48.604 13:17:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.604 13:17:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.604 13:17:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.604 13:17:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:48.604 13:17:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.604 Malloc0 00:06:48.604 13:17:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.862 Malloc1 00:06:48.862 13:17:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.862 13:17:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.862 13:17:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.862 13:17:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.862 13:17:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.862 13:17:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.862 13:17:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.863 13:17:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.121 /dev/nbd0 00:06:49.121 13:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.121 13:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.121 1+0 records in 00:06:49.121 1+0 records out 00:06:49.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016947 s, 24.2 MB/s 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.121 13:17:40 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.121 13:17:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.121 13:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.121 13:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.121 13:17:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.689 /dev/nbd1 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.689 1+0 records in 00:06:49.689 1+0 records out 00:06:49.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023037 s, 17.8 MB/s 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.689 13:17:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.689 { 00:06:49.689 "nbd_device": "/dev/nbd0", 00:06:49.689 "bdev_name": "Malloc0" 00:06:49.689 }, 00:06:49.689 { 00:06:49.689 "nbd_device": "/dev/nbd1", 00:06:49.689 "bdev_name": "Malloc1" 00:06:49.689 } 00:06:49.689 ]' 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.689 { 00:06:49.689 "nbd_device": "/dev/nbd0", 00:06:49.689 "bdev_name": "Malloc0" 00:06:49.689 }, 00:06:49.689 { 00:06:49.689 "nbd_device": "/dev/nbd1", 00:06:49.689 "bdev_name": "Malloc1" 00:06:49.689 } 00:06:49.689 ]' 00:06:49.689 13:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.949 /dev/nbd1' 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.949 /dev/nbd1' 00:06:49.949 
13:17:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.949 256+0 records in 00:06:49.949 256+0 records out 00:06:49.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521905 s, 201 MB/s 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.949 256+0 records in 00:06:49.949 256+0 records out 00:06:49.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199601 s, 52.5 MB/s 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.949 256+0 records in 00:06:49.949 256+0 records out 00:06:49.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219199 s, 47.8 MB/s 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.949 13:17:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.208 13:17:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.466 13:17:42 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.466 13:17:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.724 13:17:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.724 13:17:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.724 13:17:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.724 13:17:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.724 13:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.724 13:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.724 13:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.725 13:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.725 13:17:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.725 13:17:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.725 13:17:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.725 13:17:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.725 13:17:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.992 13:17:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.251 [2024-10-14 13:17:43.011456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.251 [2024-10-14 13:17:43.054213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.251 [2024-10-14 13:17:43.054217] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.509 [2024-10-14 13:17:43.112161] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.509 [2024-10-14 13:17:43.112224] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.040 13:17:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 105916 /var/tmp/spdk-nbd.sock 00:06:54.040 13:17:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 105916 ']' 00:06:54.040 13:17:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.040 13:17:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.040 13:17:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:54.040 13:17:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.040 13:17:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:54.298 13:17:46 event.app_repeat -- event/event.sh@39 -- # killprocess 105916 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 105916 ']' 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 105916 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105916 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105916' 00:06:54.298 killing process with pid 105916 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@969 -- # kill 105916 00:06:54.298 13:17:46 event.app_repeat -- common/autotest_common.sh@974 -- # wait 105916 00:06:54.557 spdk_app_start is called in Round 0. 00:06:54.557 Shutdown signal received, stop current app iteration 00:06:54.557 Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 reinitialization... 00:06:54.557 spdk_app_start is called in Round 1. 00:06:54.557 Shutdown signal received, stop current app iteration 00:06:54.557 Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 reinitialization... 00:06:54.557 spdk_app_start is called in Round 2. 
00:06:54.557 Shutdown signal received, stop current app iteration 00:06:54.557 Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 reinitialization... 00:06:54.557 spdk_app_start is called in Round 3. 00:06:54.557 Shutdown signal received, stop current app iteration 00:06:54.557 13:17:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:54.557 13:17:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:54.557 00:06:54.557 real 0m18.696s 00:06:54.557 user 0m41.403s 00:06:54.557 sys 0m3.352s 00:06:54.557 13:17:46 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.557 13:17:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.557 ************************************ 00:06:54.557 END TEST app_repeat 00:06:54.557 ************************************ 00:06:54.557 13:17:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:54.557 13:17:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.557 13:17:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.557 13:17:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.557 13:17:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.557 ************************************ 00:06:54.557 START TEST cpu_locks 00:06:54.558 ************************************ 00:06:54.558 13:17:46 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.558 * Looking for test storage... 
00:06:54.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.817 13:17:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 00:06:54.817 00:06:54.817 ' 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 
00:06:54.817 00:06:54.817 ' 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 00:06:54.817 00:06:54.817 ' 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:54.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.817 --rc genhtml_branch_coverage=1 00:06:54.817 --rc genhtml_function_coverage=1 00:06:54.817 --rc genhtml_legend=1 00:06:54.817 --rc geninfo_all_blocks=1 00:06:54.817 --rc geninfo_unexecuted_blocks=1 00:06:54.817 00:06:54.817 ' 00:06:54.817 13:17:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:54.817 13:17:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:54.817 13:17:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:54.817 13:17:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.817 13:17:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.817 ************************************ 00:06:54.817 START TEST default_locks 00:06:54.817 ************************************ 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=108295 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 108295 00:06:54.817 13:17:46 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 108295 ']' 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.817 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.817 [2024-10-14 13:17:46.586480] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:06:54.817 [2024-10-14 13:17:46.586594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108295 ] 00:06:54.817 [2024-10-14 13:17:46.648984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.076 [2024-10-14 13:17:46.698273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.334 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.334 13:17:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:55.334 13:17:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 108295 00:06:55.334 13:17:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 108295 00:06:55.334 13:17:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.596 lslocks: write error 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 108295 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 108295 ']' 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 108295 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108295 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 108295' 00:06:55.596 killing process with pid 108295 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 108295 00:06:55.596 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 108295 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 108295 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 108295 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 108295 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 108295 ']' 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (108295) - No such process 00:06:55.856 ERROR: process (pid: 108295) is no longer running 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.856 00:06:55.856 real 0m1.106s 00:06:55.856 user 0m1.067s 00:06:55.856 sys 0m0.510s 00:06:55.856 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.857 13:17:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.857 ************************************ 00:06:55.857 END TEST default_locks 00:06:55.857 ************************************ 00:06:55.857 13:17:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:55.857 13:17:47 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.857 13:17:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.857 13:17:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.857 ************************************ 00:06:55.857 START TEST default_locks_via_rpc 00:06:55.857 ************************************ 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108472 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 108472 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 108472 ']' 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.857 13:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.118 [2024-10-14 13:17:47.744837] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:06:56.118 [2024-10-14 13:17:47.744931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108472 ] 00:06:56.118 [2024-10-14 13:17:47.802819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.118 [2024-10-14 13:17:47.847742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.378 13:17:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 108472 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 108472 00:06:56.378 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 108472 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 108472 ']' 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 108472 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108472 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108472' 00:06:56.639 killing process with pid 108472 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 108472 00:06:56.639 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 108472 00:06:56.899 00:06:56.899 real 0m1.040s 00:06:56.899 user 0m1.013s 00:06:56.899 sys 0m0.490s 00:06:56.899 13:17:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.900 13:17:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.900 ************************************ 00:06:56.900 END TEST default_locks_via_rpc 00:06:56.900 ************************************ 00:06:56.900 13:17:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:56.900 13:17:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.900 13:17:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.900 13:17:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.159 ************************************ 00:06:57.159 START TEST non_locking_app_on_locked_coremask 00:06:57.159 ************************************ 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108619 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108619 /var/tmp/spdk.sock 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 108619 ']' 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:57.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.159 13:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.159 [2024-10-14 13:17:48.839572] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:57.159 [2024-10-14 13:17:48.839659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108619 ] 00:06:57.159 [2024-10-14 13:17:48.899208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.159 [2024-10-14 13:17:48.945584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.417 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.417 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108742 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108742 /var/tmp/spdk2.sock 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 108742 ']' 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.418 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.418 [2024-10-14 13:17:49.241403] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:57.418 [2024-10-14 13:17:49.241515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108742 ] 00:06:57.678 [2024-10-14 13:17:49.323779] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.678 [2024-10-14 13:17:49.323805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.678 [2024-10-14 13:17:49.412223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.248 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.248 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.248 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108619 00:06:58.248 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108619 00:06:58.248 13:17:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.507 lslocks: write error 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108619 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 108619 ']' 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 108619 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108619 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 108619' 00:06:58.507 killing process with pid 108619 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 108619 00:06:58.507 13:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 108619 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108742 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 108742 ']' 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 108742 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 108742 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 108742' 00:06:59.449 killing process with pid 108742 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 108742 00:06:59.449 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 108742 00:06:59.709 00:06:59.709 real 0m2.721s 00:06:59.709 user 0m2.725s 00:06:59.709 sys 0m0.993s 00:06:59.709 13:17:51 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.709 13:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.709 ************************************ 00:06:59.709 END TEST non_locking_app_on_locked_coremask 00:06:59.709 ************************************ 00:06:59.709 13:17:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:59.709 13:17:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.709 13:17:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.709 13:17:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.709 ************************************ 00:06:59.709 START TEST locking_app_on_unlocked_coremask 00:06:59.709 ************************************ 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=109045 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 109045 /var/tmp/spdk.sock 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109045 ']' 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.709 13:17:51 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.709 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.970 [2024-10-14 13:17:51.610913] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:06:59.970 [2024-10-14 13:17:51.610978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109045 ] 00:06:59.970 [2024-10-14 13:17:51.670359] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.970 [2024-10-14 13:17:51.670404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.970 [2024-10-14 13:17:51.719753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=109051 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 109051 /var/tmp/spdk2.sock 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109051 ']' 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.230 13:17:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.230 [2024-10-14 13:17:52.041920] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:07:00.230 [2024-10-14 13:17:52.042015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109051 ] 00:07:00.491 [2024-10-14 13:17:52.133472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.491 [2024-10-14 13:17:52.225382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.062 13:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.062 13:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.062 13:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 109051 00:07:01.062 13:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109051 00:07:01.062 13:17:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.631 lslocks: write error 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 109045 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109045 ']' 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 109045 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109045 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109045' 00:07:01.631 killing process with pid 109045 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 109045 00:07:01.631 13:17:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 109045 00:07:02.201 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 109051 00:07:02.201 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109051 ']' 00:07:02.201 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 109051 00:07:02.201 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:02.201 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.201 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109051 00:07:02.461 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.461 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.461 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109051' 00:07:02.461 killing process with pid 109051 00:07:02.461 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 109051 00:07:02.461 13:17:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 109051 00:07:02.722 00:07:02.722 real 0m2.882s 00:07:02.722 user 0m2.908s 00:07:02.722 sys 0m1.029s 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.722 ************************************ 00:07:02.722 END TEST locking_app_on_unlocked_coremask 00:07:02.722 ************************************ 00:07:02.722 13:17:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:02.722 13:17:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.722 13:17:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.722 13:17:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.722 ************************************ 00:07:02.722 START TEST locking_app_on_locked_coremask 00:07:02.722 ************************************ 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=109478 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 109478 /var/tmp/spdk.sock 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109478 ']' 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.722 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.722 [2024-10-14 13:17:54.541730] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:02.722 [2024-10-14 13:17:54.541831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109478 ] 00:07:02.983 [2024-10-14 13:17:54.601213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.983 [2024-10-14 13:17:54.651405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=109483 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 109483 /var/tmp/spdk2.sock 
00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 109483 /var/tmp/spdk2.sock 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 109483 /var/tmp/spdk2.sock 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 109483 ']' 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.242 13:17:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.242 [2024-10-14 13:17:54.959308] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:03.242 [2024-10-14 13:17:54.959396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109483 ] 00:07:03.242 [2024-10-14 13:17:55.038699] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 109478 has claimed it. 00:07:03.242 [2024-10-14 13:17:55.038751] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (109483) - No such process 00:07:03.814 ERROR: process (pid: 109483) is no longer running 00:07:03.814 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.814 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:03.814 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:03.814 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.814 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.814 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.072 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 109478 00:07:04.072 13:17:55 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109478 00:07:04.072 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.332 lslocks: write error 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 109478 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 109478 ']' 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 109478 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109478 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109478' 00:07:04.332 killing process with pid 109478 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 109478 00:07:04.332 13:17:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 109478 00:07:04.591 00:07:04.591 real 0m1.888s 00:07:04.591 user 0m2.106s 00:07:04.591 sys 0m0.620s 00:07:04.591 13:17:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.591 13:17:56 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.591 ************************************ 00:07:04.591 END TEST locking_app_on_locked_coremask 00:07:04.591 ************************************ 00:07:04.591 13:17:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.591 13:17:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.591 13:17:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.591 13:17:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.591 ************************************ 00:07:04.592 START TEST locking_overlapped_coremask 00:07:04.592 ************************************ 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109651 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109651 /var/tmp/spdk.sock 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 109651 ']' 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
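The `claim_cpu_cores` failures above ("Cannot create lock on core 0, probably process … has claimed it") come from per-core lock files named `/var/tmp/spdk_cpu_lock_*`, which the test later inspects with `lslocks`. A hedged sketch of that idea using `flock(1)` advisory locks — the temp directory and `claim_core` helper are stand-ins for illustration, not SPDK's app.c code:

```shell
# Sketch: one lock file per CPU core; a second claimant fails fast.
# lockdir is a temp stand-in for /var/tmp so the example is self-contained.
lockdir=$(mktemp -d)

claim_core() {
    local core=$1 lockfile fd
    lockfile=$(printf '%s/spdk_cpu_lock_%03d' "$lockdir" "$core")
    exec {fd}>"$lockfile"           # keep the fd open to hold the lock
    if ! flock -n "$fd"; then
        echo "Cannot create lock on core $core, another process has claimed it" >&2
        return 1
    fi
    echo "claimed core $core"
}
```

Because the lock lives on the open file description, it is released automatically when the holder exits, which is why a crashed target never leaves a core permanently claimed.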
00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.592 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.851 [2024-10-14 13:17:56.480793] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:04.851 [2024-10-14 13:17:56.480884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109651 ] 00:07:04.851 [2024-10-14 13:17:56.538090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.851 [2024-10-14 13:17:56.582797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.851 [2024-10-14 13:17:56.582941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.851 [2024-10-14 13:17:56.582944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109781 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109781 /var/tmp/spdk2.sock 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 109781 /var/tmp/spdk2.sock 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 109781 /var/tmp/spdk2.sock 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 109781 ']' 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.111 13:17:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.111 [2024-10-14 13:17:56.912274] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:07:05.111 [2024-10-14 13:17:56.912359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109781 ] 00:07:05.370 [2024-10-14 13:17:57.002060] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109651 has claimed it. 00:07:05.370 [2024-10-14 13:17:57.002137] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (109781) - No such process 00:07:05.940 ERROR: process (pid: 109781) is no longer running 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109651 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 109651 ']' 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 109651 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109651 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109651' 00:07:05.940 killing process with pid 109651 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 109651 00:07:05.940 13:17:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 109651 00:07:06.199 00:07:06.199 real 0m1.613s 00:07:06.199 user 0m4.566s 00:07:06.200 sys 0m0.465s 00:07:06.200 13:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.200 13:17:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.200 ************************************ 
00:07:06.200 END TEST locking_overlapped_coremask 00:07:06.200 ************************************ 00:07:06.460 13:17:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:06.460 13:17:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.460 13:17:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.460 13:17:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.460 ************************************ 00:07:06.460 START TEST locking_overlapped_coremask_via_rpc 00:07:06.460 ************************************ 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109945 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 109945 /var/tmp/spdk.sock 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 109945 ']' 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.460 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.460 [2024-10-14 13:17:58.149638] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:06.460 [2024-10-14 13:17:58.149741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109945 ] 00:07:06.460 [2024-10-14 13:17:58.213002] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:06.460 [2024-10-14 13:17:58.213045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.460 [2024-10-14 13:17:58.265153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.460 [2024-10-14 13:17:58.265217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.460 [2024-10-14 13:17:58.265220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109957 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 109957 /var/tmp/spdk2.sock 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 109957 ']' 00:07:06.720 13:17:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.720 13:17:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.978 [2024-10-14 13:17:58.587323] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:06.978 [2024-10-14 13:17:58.587406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109957 ] 00:07:06.978 [2024-10-14 13:17:58.676522] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:06.978 [2024-10-14 13:17:58.676554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.978 [2024-10-14 13:17:58.772913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.978 [2024-10-14 13:17:58.772981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:06.978 [2024-10-14 13:17:58.772983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.548 13:17:59 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.548 [2024-10-14 13:17:59.298229] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109945 has claimed it. 00:07:07.548 request: 00:07:07.548 { 00:07:07.548 "method": "framework_enable_cpumask_locks", 00:07:07.548 "req_id": 1 00:07:07.548 } 00:07:07.548 Got JSON-RPC error response 00:07:07.548 response: 00:07:07.548 { 00:07:07.548 "code": -32603, 00:07:07.548 "message": "Failed to claim CPU core: 2" 00:07:07.548 } 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 109945 /var/tmp/spdk.sock 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 109945 ']' 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.548 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 109957 /var/tmp/spdk2.sock 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 109957 ']' 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
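The `check_remaining_locks` steps seen earlier in the log glob `/var/tmp/spdk_cpu_lock_*` into an array and compare it against a brace expansion of the expected core numbers (`{000..002}` for mask 0x7). The same comparison, relocated to a temp directory so it is self-contained:

```shell
# check_remaining_locks-style comparison: glob the actual lock files and
# match them against the expected {000..002} set (cores 0-2, mask 0x7).
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}

locks=("$tmp"/spdk_cpu_lock_*)
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "locks match"
else
    echo "unexpected locks: ${locks[*]}" >&2
fi
```

The comparison works because pathname expansion returns sorted results, so a glob over exactly the expected files yields the same word list as the brace expansion.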
00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.805 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.064 00:07:08.064 real 0m1.744s 00:07:08.064 user 0m0.913s 00:07:08.064 sys 0m0.135s 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.064 13:17:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.064 ************************************ 00:07:08.064 END TEST locking_overlapped_coremask_via_rpc 00:07:08.064 ************************************ 00:07:08.064 13:17:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:08.064 13:17:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109945 ]] 00:07:08.064 13:17:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 109945 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109945 ']' 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109945 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109945 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109945' 00:07:08.064 killing process with pid 109945 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 109945 00:07:08.064 13:17:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 109945 00:07:08.629 13:18:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109957 ]] 00:07:08.630 13:18:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109957 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109957 ']' 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109957 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109957 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109957' 00:07:08.630 
killing process with pid 109957 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 109957 00:07:08.630 13:18:00 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 109957 00:07:08.889 13:18:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.889 13:18:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:08.889 13:18:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109945 ]] 00:07:08.889 13:18:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109945 00:07:08.889 13:18:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109945 ']' 00:07:08.889 13:18:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109945 00:07:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (109945) - No such process 00:07:08.889 13:18:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 109945 is not found' 00:07:08.889 Process with pid 109945 is not found 00:07:08.889 13:18:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109957 ]] 00:07:08.889 13:18:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109957 00:07:08.889 13:18:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 109957 ']' 00:07:08.889 13:18:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 109957 00:07:08.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (109957) - No such process 00:07:08.889 13:18:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 109957 is not found' 00:07:08.889 Process with pid 109957 is not found 00:07:08.889 13:18:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.889 00:07:08.889 real 0m14.339s 00:07:08.889 user 0m25.245s 00:07:08.889 sys 0m5.205s 00:07:08.889 13:18:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.889 13:18:00 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.889 ************************************ 00:07:08.889 END TEST cpu_locks 00:07:08.889 ************************************ 00:07:08.889 00:07:08.889 real 0m38.780s 00:07:08.889 user 1m15.686s 00:07:08.889 sys 0m9.353s 00:07:08.889 13:18:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.889 13:18:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.889 ************************************ 00:07:08.889 END TEST event 00:07:08.889 ************************************ 00:07:08.889 13:18:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:08.889 13:18:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.889 13:18:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.889 13:18:00 -- common/autotest_common.sh@10 -- # set +x 00:07:09.149 ************************************ 00:07:09.149 START TEST thread 00:07:09.149 ************************************ 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:09.149 * Looking for test storage... 
00:07:09.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.149 13:18:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.149 13:18:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.149 13:18:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.149 13:18:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.149 13:18:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.149 13:18:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.149 13:18:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.149 13:18:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.149 13:18:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.149 13:18:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.149 13:18:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.149 13:18:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:09.149 13:18:00 thread -- scripts/common.sh@345 -- # : 1 00:07:09.149 13:18:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.149 13:18:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.149 13:18:00 thread -- scripts/common.sh@365 -- # decimal 1 00:07:09.149 13:18:00 thread -- scripts/common.sh@353 -- # local d=1 00:07:09.149 13:18:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.149 13:18:00 thread -- scripts/common.sh@355 -- # echo 1 00:07:09.149 13:18:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.149 13:18:00 thread -- scripts/common.sh@366 -- # decimal 2 00:07:09.149 13:18:00 thread -- scripts/common.sh@353 -- # local d=2 00:07:09.149 13:18:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.149 13:18:00 thread -- scripts/common.sh@355 -- # echo 2 00:07:09.149 13:18:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.149 13:18:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.149 13:18:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.149 13:18:00 thread -- scripts/common.sh@368 -- # return 0 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.149 --rc genhtml_function_coverage=1 00:07:09.149 --rc genhtml_legend=1 00:07:09.149 --rc geninfo_all_blocks=1 00:07:09.149 --rc geninfo_unexecuted_blocks=1 00:07:09.149 00:07:09.149 ' 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.149 --rc genhtml_function_coverage=1 00:07:09.149 --rc genhtml_legend=1 00:07:09.149 --rc geninfo_all_blocks=1 00:07:09.149 --rc geninfo_unexecuted_blocks=1 00:07:09.149 00:07:09.149 ' 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.149 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.149 --rc genhtml_function_coverage=1 00:07:09.149 --rc genhtml_legend=1 00:07:09.149 --rc geninfo_all_blocks=1 00:07:09.149 --rc geninfo_unexecuted_blocks=1 00:07:09.149 00:07:09.149 ' 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.149 --rc genhtml_branch_coverage=1 00:07:09.149 --rc genhtml_function_coverage=1 00:07:09.149 --rc genhtml_legend=1 00:07:09.149 --rc geninfo_all_blocks=1 00:07:09.149 --rc geninfo_unexecuted_blocks=1 00:07:09.149 00:07:09.149 ' 00:07:09.149 13:18:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.149 13:18:00 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.149 ************************************ 00:07:09.149 START TEST thread_poller_perf 00:07:09.149 ************************************ 00:07:09.149 13:18:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.149 [2024-10-14 13:18:00.948272] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:07:09.149 [2024-10-14 13:18:00.948331] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110388 ] 00:07:09.407 [2024-10-14 13:18:01.006317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.407 [2024-10-14 13:18:01.056495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.407 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:10.341 [2024-10-14T11:18:02.194Z] ====================================== 00:07:10.341 [2024-10-14T11:18:02.194Z] busy:2710341567 (cyc) 00:07:10.341 [2024-10-14T11:18:02.194Z] total_run_count: 351000 00:07:10.341 [2024-10-14T11:18:02.194Z] tsc_hz: 2700000000 (cyc) 00:07:10.341 [2024-10-14T11:18:02.194Z] ====================================== 00:07:10.341 [2024-10-14T11:18:02.194Z] poller_cost: 7721 (cyc), 2859 (nsec) 00:07:10.341 00:07:10.341 real 0m1.173s 00:07:10.341 user 0m1.107s 00:07:10.341 sys 0m0.061s 00:07:10.341 13:18:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.341 13:18:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.341 ************************************ 00:07:10.341 END TEST thread_poller_perf 00:07:10.341 ************************************ 00:07:10.341 13:18:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.341 13:18:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:10.342 13:18:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.342 13:18:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.342 ************************************ 00:07:10.342 START TEST thread_poller_perf 00:07:10.342 
************************************ 00:07:10.342 13:18:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.342 [2024-10-14 13:18:02.170103] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:10.342 [2024-10-14 13:18:02.170201] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110608 ] 00:07:10.600 [2024-10-14 13:18:02.230229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.600 [2024-10-14 13:18:02.275988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.600 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:11.535 [2024-10-14T11:18:03.388Z] ====================================== 00:07:11.535 [2024-10-14T11:18:03.388Z] busy:2702841615 (cyc) 00:07:11.535 [2024-10-14T11:18:03.388Z] total_run_count: 4627000 00:07:11.535 [2024-10-14T11:18:03.388Z] tsc_hz: 2700000000 (cyc) 00:07:11.535 [2024-10-14T11:18:03.388Z] ====================================== 00:07:11.535 [2024-10-14T11:18:03.388Z] poller_cost: 584 (cyc), 216 (nsec) 00:07:11.535 00:07:11.535 real 0m1.164s 00:07:11.535 user 0m1.090s 00:07:11.535 sys 0m0.068s 00:07:11.535 13:18:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.535 13:18:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 ************************************ 00:07:11.535 END TEST thread_poller_perf 00:07:11.535 ************************************ 00:07:11.535 13:18:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:11.535 00:07:11.535 real 0m2.574s 00:07:11.535 user 0m2.337s 00:07:11.535 sys 0m0.241s 00:07:11.535 13:18:03 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.535 13:18:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 ************************************ 00:07:11.535 END TEST thread 00:07:11.535 ************************************ 00:07:11.535 13:18:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:11.535 13:18:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.535 13:18:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.535 13:18:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.535 13:18:03 -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 ************************************ 00:07:11.535 START TEST app_cmdline 00:07:11.535 ************************************ 00:07:11.535 13:18:03 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.795 * Looking for test storage... 00:07:11.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
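An aside on the two `poller_perf` summaries logged above: the reported `poller_cost` figures are reproducible from the other fields in each summary. The arithmetic below is a back-of-the-envelope check, assuming cost is busy cycles divided by `total_run_count` and nanoseconds are derived from the reported `tsc_hz`; this is not SPDK's source, just integer math that matches the logged numbers.

```shell
# Back-of-the-envelope check of the poller_cost lines above
# (assumption: poller_cost = busy cycles / total_run_count, nsec via tsc_hz).
tsc_hz=2700000000

# Run with -l 1 (1 us poller period): busy:2710341567 (cyc), total_run_count: 351000
cyc=$(( 2710341567 / 351000 ))          # 7721, matching "poller_cost: 7721 (cyc)"
nsec=$(( cyc * 1000000000 / tsc_hz ))   # 2859, matching "2859 (nsec)"
echo "$cyc $nsec"

# Run with -l 0 (busy poll): busy:2702841615 (cyc), total_run_count: 4627000
cyc=$(( 2702841615 / 4627000 ))         # 584, matching "poller_cost: 584 (cyc)"
nsec=$(( cyc * 1000000000 / tsc_hz ))   # 216, matching "216 (nsec)"
echo "$cyc $nsec"
```

The roughly 13x gap between the two costs reflects the timer bookkeeping a 1 us periodic poller pays versus a bare busy-poll loop.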
00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.795 13:18:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:11.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.795 --rc genhtml_branch_coverage=1 
00:07:11.795 --rc genhtml_function_coverage=1 00:07:11.795 --rc genhtml_legend=1 00:07:11.795 --rc geninfo_all_blocks=1 00:07:11.795 --rc geninfo_unexecuted_blocks=1 00:07:11.795 00:07:11.795 ' 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:11.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.795 --rc genhtml_branch_coverage=1 00:07:11.795 --rc genhtml_function_coverage=1 00:07:11.795 --rc genhtml_legend=1 00:07:11.795 --rc geninfo_all_blocks=1 00:07:11.795 --rc geninfo_unexecuted_blocks=1 00:07:11.795 00:07:11.795 ' 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:11.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.795 --rc genhtml_branch_coverage=1 00:07:11.795 --rc genhtml_function_coverage=1 00:07:11.795 --rc genhtml_legend=1 00:07:11.795 --rc geninfo_all_blocks=1 00:07:11.795 --rc geninfo_unexecuted_blocks=1 00:07:11.795 00:07:11.795 ' 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:11.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.795 --rc genhtml_branch_coverage=1 00:07:11.795 --rc genhtml_function_coverage=1 00:07:11.795 --rc genhtml_legend=1 00:07:11.795 --rc geninfo_all_blocks=1 00:07:11.795 --rc geninfo_unexecuted_blocks=1 00:07:11.795 00:07:11.795 ' 00:07:11.795 13:18:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.795 13:18:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110916 00:07:11.795 13:18:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.795 13:18:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110916 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 110916 ']' 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.795 13:18:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.795 [2024-10-14 13:18:03.584252] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:11.795 [2024-10-14 13:18:03.584337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110916 ] 00:07:11.795 [2024-10-14 13:18:03.643235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.056 [2024-10-14 13:18:03.690674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.317 13:18:03 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.317 13:18:03 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:12.317 13:18:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:12.577 { 00:07:12.577 "version": "SPDK v25.01-pre git sha1 b6849ff47", 00:07:12.577 "fields": { 00:07:12.577 "major": 25, 00:07:12.577 "minor": 1, 00:07:12.577 "patch": 0, 00:07:12.577 "suffix": "-pre", 00:07:12.577 "commit": "b6849ff47" 00:07:12.577 } 00:07:12.577 } 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.577 13:18:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type 
-t "$arg")" in 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.577 13:18:04 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.838 request: 00:07:12.838 { 00:07:12.838 "method": "env_dpdk_get_mem_stats", 00:07:12.838 "req_id": 1 00:07:12.838 } 00:07:12.838 Got JSON-RPC error response 00:07:12.838 response: 00:07:12.838 { 00:07:12.838 "code": -32601, 00:07:12.838 "message": "Method not found" 00:07:12.838 } 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.838 13:18:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110916 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 110916 ']' 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 110916 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110916 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110916' 00:07:12.838 killing process with pid 110916 00:07:12.838 13:18:04 
app_cmdline -- common/autotest_common.sh@969 -- # kill 110916 00:07:12.838 13:18:04 app_cmdline -- common/autotest_common.sh@974 -- # wait 110916 00:07:13.409 00:07:13.409 real 0m1.574s 00:07:13.409 user 0m1.956s 00:07:13.409 sys 0m0.498s 00:07:13.409 13:18:04 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.409 13:18:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.409 ************************************ 00:07:13.409 END TEST app_cmdline 00:07:13.409 ************************************ 00:07:13.409 13:18:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:13.409 13:18:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.409 13:18:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.409 13:18:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.409 ************************************ 00:07:13.409 START TEST version 00:07:13.409 ************************************ 00:07:13.409 13:18:05 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:13.409 * Looking for test storage... 
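An aside on the `env_dpdk_get_mem_stats` failure logged just above: it is the expected negative-path check in `cmdline.sh`, since `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any other method returns the JSON-RPC "Method not found" error (code -32601). A minimal sketch of detecting that case; the `resp` payload here is a hypothetical string mirroring the error object in the log, not captured output.

```shell
# Hypothetical JSON-RPC error body mirroring the "Method not found" response above.
resp='{"code": -32601, "message": "Method not found"}'

# Pull out the numeric error code with standard tools (no jq dependency assumed).
code=$(printf '%s' "$resp" | sed -n 's/.*"code": *\(-\{0,1\}[0-9]*\).*/\1/p')

if [ "$code" = "-32601" ]; then
  echo "method rejected as expected (restricted by --rpcs-allowed)"
fi
```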
00:07:13.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:13.409 13:18:05 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.409 13:18:05 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.409 13:18:05 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.409 13:18:05 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.409 13:18:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.409 13:18:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.409 13:18:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.409 13:18:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.409 13:18:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.409 13:18:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.409 13:18:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.409 13:18:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.409 13:18:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.409 13:18:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.409 13:18:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.409 13:18:05 version -- scripts/common.sh@344 -- # case "$op" in 00:07:13.409 13:18:05 version -- scripts/common.sh@345 -- # : 1 00:07:13.409 13:18:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.409 13:18:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.409 13:18:05 version -- scripts/common.sh@365 -- # decimal 1 00:07:13.409 13:18:05 version -- scripts/common.sh@353 -- # local d=1 00:07:13.409 13:18:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.409 13:18:05 version -- scripts/common.sh@355 -- # echo 1 00:07:13.409 13:18:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.409 13:18:05 version -- scripts/common.sh@366 -- # decimal 2 00:07:13.409 13:18:05 version -- scripts/common.sh@353 -- # local d=2 00:07:13.409 13:18:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.409 13:18:05 version -- scripts/common.sh@355 -- # echo 2 00:07:13.409 13:18:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.409 13:18:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.409 13:18:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.410 13:18:05 version -- scripts/common.sh@368 -- # return 0 00:07:13.410 13:18:05 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.410 13:18:05 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.410 --rc genhtml_branch_coverage=1 00:07:13.410 --rc genhtml_function_coverage=1 00:07:13.410 --rc genhtml_legend=1 00:07:13.410 --rc geninfo_all_blocks=1 00:07:13.410 --rc geninfo_unexecuted_blocks=1 00:07:13.410 00:07:13.410 ' 00:07:13.410 13:18:05 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.410 --rc genhtml_branch_coverage=1 00:07:13.410 --rc genhtml_function_coverage=1 00:07:13.410 --rc genhtml_legend=1 00:07:13.410 --rc geninfo_all_blocks=1 00:07:13.410 --rc geninfo_unexecuted_blocks=1 00:07:13.410 00:07:13.410 ' 00:07:13.410 13:18:05 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.410 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.410 --rc genhtml_branch_coverage=1 00:07:13.410 --rc genhtml_function_coverage=1 00:07:13.410 --rc genhtml_legend=1 00:07:13.410 --rc geninfo_all_blocks=1 00:07:13.410 --rc geninfo_unexecuted_blocks=1 00:07:13.410 00:07:13.410 ' 00:07:13.410 13:18:05 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.410 --rc genhtml_branch_coverage=1 00:07:13.410 --rc genhtml_function_coverage=1 00:07:13.410 --rc genhtml_legend=1 00:07:13.410 --rc geninfo_all_blocks=1 00:07:13.410 --rc geninfo_unexecuted_blocks=1 00:07:13.410 00:07:13.410 ' 00:07:13.410 13:18:05 version -- app/version.sh@17 -- # get_header_version major 00:07:13.410 13:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # cut -f2 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.410 13:18:05 version -- app/version.sh@17 -- # major=25 00:07:13.410 13:18:05 version -- app/version.sh@18 -- # get_header_version minor 00:07:13.410 13:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # cut -f2 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.410 13:18:05 version -- app/version.sh@18 -- # minor=1 00:07:13.410 13:18:05 version -- app/version.sh@19 -- # get_header_version patch 00:07:13.410 13:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # cut -f2 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.410 
13:18:05 version -- app/version.sh@19 -- # patch=0 00:07:13.410 13:18:05 version -- app/version.sh@20 -- # get_header_version suffix 00:07:13.410 13:18:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # cut -f2 00:07:13.410 13:18:05 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.410 13:18:05 version -- app/version.sh@20 -- # suffix=-pre 00:07:13.410 13:18:05 version -- app/version.sh@22 -- # version=25.1 00:07:13.410 13:18:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:13.410 13:18:05 version -- app/version.sh@28 -- # version=25.1rc0 00:07:13.410 13:18:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:13.410 13:18:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:13.410 13:18:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:13.410 13:18:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:13.410 00:07:13.410 real 0m0.199s 00:07:13.410 user 0m0.127s 00:07:13.410 sys 0m0.096s 00:07:13.410 13:18:05 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.410 13:18:05 version -- common/autotest_common.sh@10 -- # set +x 00:07:13.410 ************************************ 00:07:13.410 END TEST version 00:07:13.410 ************************************ 00:07:13.410 13:18:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:13.410 13:18:05 -- spdk/autotest.sh@194 -- # uname -s 00:07:13.410 13:18:05 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:13.410 13:18:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.410 13:18:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.410 13:18:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:13.410 13:18:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.410 13:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.410 13:18:05 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:13.410 13:18:05 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:13.410 13:18:05 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.410 13:18:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.410 13:18:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.410 13:18:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 ************************************ 00:07:13.669 START TEST nvmf_tcp 00:07:13.669 ************************************ 00:07:13.669 13:18:05 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.669 * Looking for test storage... 
00:07:13.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:13.669 13:18:05 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.669 13:18:05 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.670 13:18:05 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.670 --rc genhtml_branch_coverage=1 00:07:13.670 --rc genhtml_function_coverage=1 00:07:13.670 --rc genhtml_legend=1 00:07:13.670 --rc geninfo_all_blocks=1 00:07:13.670 --rc geninfo_unexecuted_blocks=1 00:07:13.670 00:07:13.670 ' 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.670 --rc genhtml_branch_coverage=1 00:07:13.670 --rc genhtml_function_coverage=1 00:07:13.670 --rc genhtml_legend=1 00:07:13.670 --rc geninfo_all_blocks=1 00:07:13.670 --rc geninfo_unexecuted_blocks=1 00:07:13.670 00:07:13.670 ' 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:13.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.670 --rc genhtml_branch_coverage=1 00:07:13.670 --rc genhtml_function_coverage=1 00:07:13.670 --rc genhtml_legend=1 00:07:13.670 --rc geninfo_all_blocks=1 00:07:13.670 --rc geninfo_unexecuted_blocks=1 00:07:13.670 00:07:13.670 ' 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.670 --rc genhtml_branch_coverage=1 00:07:13.670 --rc genhtml_function_coverage=1 00:07:13.670 --rc genhtml_legend=1 00:07:13.670 --rc geninfo_all_blocks=1 00:07:13.670 --rc geninfo_unexecuted_blocks=1 00:07:13.670 00:07:13.670 ' 00:07:13.670 13:18:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:13.670 13:18:05 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:13.670 13:18:05 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.670 13:18:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.670 ************************************ 00:07:13.670 START TEST nvmf_target_core 00:07:13.670 ************************************ 00:07:13.670 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:13.670 * Looking for test storage... 
00:07:13.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:13.670 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.670 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.670 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.930 --rc genhtml_branch_coverage=1 00:07:13.930 --rc genhtml_function_coverage=1 00:07:13.930 --rc genhtml_legend=1 00:07:13.930 --rc geninfo_all_blocks=1 00:07:13.930 --rc geninfo_unexecuted_blocks=1 00:07:13.930 00:07:13.930 ' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.930 --rc genhtml_branch_coverage=1 
00:07:13.930 --rc genhtml_function_coverage=1 00:07:13.930 --rc genhtml_legend=1 00:07:13.930 --rc geninfo_all_blocks=1 00:07:13.930 --rc geninfo_unexecuted_blocks=1 00:07:13.930 00:07:13.930 ' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.930 --rc genhtml_branch_coverage=1 00:07:13.930 --rc genhtml_function_coverage=1 00:07:13.930 --rc genhtml_legend=1 00:07:13.930 --rc geninfo_all_blocks=1 00:07:13.930 --rc geninfo_unexecuted_blocks=1 00:07:13.930 00:07:13.930 ' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.930 --rc genhtml_branch_coverage=1 00:07:13.930 --rc genhtml_function_coverage=1 00:07:13.930 --rc genhtml_legend=1 00:07:13.930 --rc geninfo_all_blocks=1 00:07:13.930 --rc geninfo_unexecuted_blocks=1 00:07:13.930 00:07:13.930 ' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
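The warning above (`common.sh: line 33: [: : integer expression expected`) comes from the traced test `'[' '' -eq 1 ']'`: `-eq` requires both operands to be integers, so an empty or unset variable makes `[` error out rather than simply evaluate false. A minimal sketch of the failure and a common guard (the function name `num_flag_enabled` is hypothetical, not from the SPDK scripts):

```shell
num_flag_enabled() {
  # Treat $1 as a numeric flag. Defaulting the empty case to 0 avoids the
  # "[: : integer expression expected" error that a bare [ "" -eq 1 ] raises.
  [ "${1:-0}" -eq 1 ]
}

flag=""
# The unguarded form errors on stderr and returns status 2:
[ "$flag" -eq 1 ] 2>/dev/null || echo "bare test fails on empty string"
# The guarded form is simply false, with no error:
num_flag_enabled "$flag" || echo "guarded test is false, no error"
```

In the log this is harmless noise: the script only uses the test's (false) result, so the suite continues.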
00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.930 ************************************ 00:07:13.930 START TEST nvmf_abort 00:07:13.930 ************************************ 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:13.930 * Looking for test storage... 
00:07:13.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.930 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.931 
13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.931 --rc genhtml_branch_coverage=1 00:07:13.931 --rc genhtml_function_coverage=1 00:07:13.931 --rc genhtml_legend=1 00:07:13.931 --rc geninfo_all_blocks=1 00:07:13.931 --rc 
geninfo_unexecuted_blocks=1 00:07:13.931 00:07:13.931 ' 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.931 --rc genhtml_branch_coverage=1 00:07:13.931 --rc genhtml_function_coverage=1 00:07:13.931 --rc genhtml_legend=1 00:07:13.931 --rc geninfo_all_blocks=1 00:07:13.931 --rc geninfo_unexecuted_blocks=1 00:07:13.931 00:07:13.931 ' 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.931 --rc genhtml_branch_coverage=1 00:07:13.931 --rc genhtml_function_coverage=1 00:07:13.931 --rc genhtml_legend=1 00:07:13.931 --rc geninfo_all_blocks=1 00:07:13.931 --rc geninfo_unexecuted_blocks=1 00:07:13.931 00:07:13.931 ' 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.931 --rc genhtml_branch_coverage=1 00:07:13.931 --rc genhtml_function_coverage=1 00:07:13.931 --rc genhtml_legend=1 00:07:13.931 --rc geninfo_all_blocks=1 00:07:13.931 --rc geninfo_unexecuted_blocks=1 00:07:13.931 00:07:13.931 ' 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
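The repeated `cmp_versions 1.15 '<' 2` xtrace above (split on `IFS=.-:`, `read -ra`, then compare component by component) is checking whether the installed `lcov` predates 2.x before choosing `--rc` options. A hedged approximation of that comparison logic, not the actual `scripts/common.sh` code (`version_lt` is a made-up name):

```shell
version_lt() {
  # Return 0 iff dotted version $1 < $2, comparing numeric components.
  local -a ver1 ver2
  local v len
  IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15), as in the trace
  IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # Missing components default to 0, so "1" compares like "1.0".
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Splitting on `.` and comparing arithmetically is what makes `1.2 < 1.10` come out true, which a plain string comparison would get wrong.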
00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.931 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.192 13:18:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:14.192 13:18:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:16.732 13:18:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:16.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:16.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.732 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:16.733 13:18:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:16.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:16.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:16.733 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:16.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:16.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:07:16.733 00:07:16.733 --- 10.0.0.2 ping statistics --- 00:07:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.733 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:07:16.733 00:07:16.733 --- 10.0.0.1 ping statistics --- 00:07:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.733 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=113366 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 113366 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 113366 ']' 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.733 [2024-10-14 13:18:08.233195] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:07:16.733 [2024-10-14 13:18:08.233293] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.733 [2024-10-14 13:18:08.299395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.733 [2024-10-14 13:18:08.348204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.733 [2024-10-14 13:18:08.348262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.733 [2024-10-14 13:18:08.348292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.733 [2024-10-14 13:18:08.348304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.733 [2024-10-14 13:18:08.348315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
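The nvmf_tcp_init steps traced above move one port of the two-port NIC (cvl_0_0) into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the physical link rather than loopback. A dry-run sketch of that sequence, with the interface and namespace names taken from the log — the `run` wrapper that echoes instead of executing is illustrative only and is not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP topology set up in the trace.
# 'run' echoes each command instead of executing it, so no root is needed;
# drop the wrapper to apply the configuration for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk            # namespace that will hold the target-side port
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"          # isolate the target port
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side; the real script tags the
# rule with an SPDK_NVMF comment so teardown can find and remove it.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                        # verify the path end to end
```

Because the target port lives in its own namespace, every target-side command in the rest of the trace (including launching nvmf_tgt itself) is prefixed with `ip netns exec cvl_0_0_ns_spdk`.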
00:07:16.733 [2024-10-14 13:18:08.349826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.733 [2024-10-14 13:18:08.349892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.733 [2024-10-14 13:18:08.349895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.733 [2024-10-14 13:18:08.491941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.733 Malloc0 00:07:16.733 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.733 Delay0 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.733 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.734 [2024-10-14 13:18:08.560015] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.734 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:16.993 [2024-10-14 13:18:08.664991] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:18.898 Initializing NVMe Controllers 00:07:18.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:18.899 controller IO queue size 128 less than required 00:07:18.899 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:18.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:18.899 Initialization complete. Launching workers. 
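Condensed from the rpc_cmd calls in abort.sh above, the target provisioning is six RPCs. In this sketch the `rpc` helper just echoes the call it would make; pointing it at the real scripts/rpc.py (path as in the trace) would execute them — the per-option comments restate the values from the log, not additional configuration:

```shell
#!/usr/bin/env bash
# The provisioning RPC sequence from the trace, condensed. 'rpc' echoes the
# call it would issue; replace the echo with scripts/rpc.py to run it.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256    # options as passed in the trace
rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MB RAM bdev, 4 KiB blocks
# Delay latencies are in microseconds, so this adds ~1 s per I/O — enough
# inflight time for the abort tool to have commands to cancel.
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The Delay0 bdev is what makes the abort test meaningful: with second-scale latency, the queue-depth-128 workload keeps commands inflight long enough for the abort example to target them, which is why the summary below reports tens of thousands of submitted aborts.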
00:07:18.899 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28858 00:07:18.899 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28919, failed to submit 62 00:07:18.899 success 28862, unsuccessful 57, failed 0 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:18.899 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.159 rmmod nvme_tcp 00:07:19.159 rmmod nvme_fabrics 00:07:19.159 rmmod nvme_keyring 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:19.159 13:18:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 113366 ']' 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 113366 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 113366 ']' 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 113366 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113366 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113366' 00:07:19.159 killing process with pid 113366 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 113366 00:07:19.159 13:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 113366 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep 
-v SPDK_NVMF 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.421 13:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.332 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.332 00:07:21.332 real 0m7.488s 00:07:21.332 user 0m10.810s 00:07:21.332 sys 0m2.450s 00:07:21.332 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.332 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.332 ************************************ 00:07:21.332 END TEST nvmf_abort 00:07:21.332 ************************************ 00:07:21.332 13:18:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:21.332 13:18:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.332 13:18:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.332 13:18:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.332 ************************************ 00:07:21.332 START TEST nvmf_ns_hotplug_stress 00:07:21.332 ************************************ 00:07:21.332 13:18:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:21.592 * Looking for test storage... 00:07:21.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.592 
13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.592 13:18:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:21.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.592 --rc genhtml_branch_coverage=1 00:07:21.592 --rc genhtml_function_coverage=1 00:07:21.592 --rc genhtml_legend=1 00:07:21.592 --rc geninfo_all_blocks=1 00:07:21.592 --rc geninfo_unexecuted_blocks=1 00:07:21.592 00:07:21.592 ' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:21.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.592 --rc genhtml_branch_coverage=1 00:07:21.592 --rc genhtml_function_coverage=1 00:07:21.592 --rc genhtml_legend=1 00:07:21.592 --rc geninfo_all_blocks=1 00:07:21.592 --rc geninfo_unexecuted_blocks=1 00:07:21.592 00:07:21.592 ' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:21.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.592 --rc genhtml_branch_coverage=1 00:07:21.592 --rc genhtml_function_coverage=1 00:07:21.592 --rc genhtml_legend=1 00:07:21.592 --rc geninfo_all_blocks=1 00:07:21.592 --rc geninfo_unexecuted_blocks=1 00:07:21.592 00:07:21.592 ' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:21.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.592 --rc genhtml_branch_coverage=1 00:07:21.592 --rc genhtml_function_coverage=1 00:07:21.592 --rc genhtml_legend=1 00:07:21.592 --rc geninfo_all_blocks=1 00:07:21.592 --rc geninfo_unexecuted_blocks=1 00:07:21.592 
00:07:21.592 ' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.592 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.593 13:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.132 13:18:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:24.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:24.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:24.132 13:18:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:24.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:24.132 13:18:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:24.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.132 13:18:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:24.132 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:24.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:24.132 00:07:24.132 --- 10.0.0.2 ping statistics --- 00:07:24.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.132 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:07:24.133 00:07:24.133 --- 10.0.0.1 ping statistics --- 00:07:24.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.133 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=115803 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 115803 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 115803 ']' 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.133 [2024-10-14 13:18:15.713435] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:07:24.133 [2024-10-14 13:18:15.713525] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.133 [2024-10-14 13:18:15.778293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.133 [2024-10-14 13:18:15.821568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.133 [2024-10-14 13:18:15.821625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.133 [2024-10-14 13:18:15.821654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.133 [2024-10-14 13:18:15.821665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.133 [2024-10-14 13:18:15.821674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:24.133 [2024-10-14 13:18:15.823160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.133 [2024-10-14 13:18:15.823253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.133 [2024-10-14 13:18:15.823257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:24.133 13:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:24.392 [2024-10-14 13:18:16.199490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.392 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:24.651 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.910 [2024-10-14 13:18:16.750170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.168 13:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.427 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:25.685 Malloc0 00:07:25.685 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.944 Delay0 00:07:25.944 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.219 13:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:26.486 NULL1 00:07:26.486 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:26.744 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=116181 00:07:26.744 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:26.744 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:26.744 13:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.122 Read completed with error (sct=0, sc=11) 00:07:28.122 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.122 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:28.123 13:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:28.381 true 00:07:28.381 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:28.381 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:29.319 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.320 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:29.320 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:29.887 true 00:07:29.887 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:29.887 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.887 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.145 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:30.145 13:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:30.405 true 00:07:30.665 13:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:30.665 13:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.923 13:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.182 13:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:31.182 13:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:31.440 true 00:07:31.440 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:31.440 13:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.376 13:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.635 13:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:32.635 13:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:32.893 true 00:07:32.893 13:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:32.893 13:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.151 13:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.410 
13:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:33.410 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:33.668 true 00:07:33.668 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:33.668 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.927 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.185 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:34.185 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:34.444 true 00:07:34.444 13:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:34.444 13:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.822 13:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.822 13:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:35.822 13:18:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:36.081 true 00:07:36.081 13:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:36.081 13:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.340 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.598 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:36.598 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:36.856 true 00:07:36.856 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:36.856 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.115 13:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.377 13:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:37.377 13:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:37.636 true 00:07:37.636 13:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:37.636 13:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.575 13:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.833 13:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:38.833 13:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:39.092 true 00:07:39.092 13:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:39.092 13:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.350 13:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.919 13:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1012 00:07:39.919 13:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:39.919 true 00:07:39.919 13:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:39.919 13:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.857 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.116 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:41.116 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:41.374 true 00:07:41.374 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:41.374 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.632 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.891 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:41.891 13:18:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:42.149 true 00:07:42.149 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:42.149 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.086 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.344 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:43.345 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:43.603 true 00:07:43.603 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:43.603 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.862 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.120 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1016 00:07:44.120 13:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:44.378 true 00:07:44.378 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:44.378 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.316 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.575 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:45.575 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:45.833 true 00:07:45.833 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:45.833 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.092 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.351 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:46.351 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:46.609 true 00:07:46.609 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:46.609 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.545 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.804 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:47.804 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:48.063 true 00:07:48.063 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:48.063 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.322 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.581 
13:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:48.581 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:48.839 true 00:07:48.839 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:48.839 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.098 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.356 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:49.356 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:49.614 true 00:07:49.614 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:49.614 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.554 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.812 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:50.812 13:18:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:51.072 true 00:07:51.072 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:51.072 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.331 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.589 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:51.589 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:51.847 true 00:07:51.847 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:51.848 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.106 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.364 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:52.364 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:52.622 true 00:07:52.622 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:52.622 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.001 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.001 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:54.001 13:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:54.260 true 00:07:54.260 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:54.260 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.518 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.776 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:54.776 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:55.035 true 00:07:55.035 13:18:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:55.035 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.293 13:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.551 13:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:55.551 13:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:55.809 true 00:07:55.809 13:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:55.810 13:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.190 Initializing NVMe Controllers 00:07:57.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:57.190 Controller IO queue size 128, less than required. 00:07:57.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.190 Controller IO queue size 128, less than required. 00:07:57.190 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:57.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:57.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:57.190 Initialization complete. Launching workers. 00:07:57.190 ======================================================== 00:07:57.190 Latency(us) 00:07:57.190 Device Information : IOPS MiB/s Average min max 00:07:57.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 653.03 0.32 87645.92 3319.50 1126810.43 00:07:57.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8974.16 4.38 14263.72 3422.22 449637.35 00:07:57.190 ======================================================== 00:07:57.190 Total : 9627.19 4.70 19241.37 3319.50 1126810.43 00:07:57.190 00:07:57.190 13:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.190 13:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:57.190 13:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:57.449 true 00:07:57.449 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 116181 00:07:57.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (116181) - No such process 00:07:57.449 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 116181 00:07:57.449 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:57.708 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.967 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:57.967 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:57.967 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:57.967 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.967 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:58.226 null0 00:07:58.226 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.226 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.226 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:58.485 null1 00:07:58.485 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.485 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.485 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:58.743 null2 00:07:58.743 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.743 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.744 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:59.002 null3 00:07:59.002 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.002 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.002 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:59.261 null4 00:07:59.261 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.261 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.261 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:59.520 null5 00:07:59.520 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.520 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.520 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:59.778 null6 00:07:59.778 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.778 13:18:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.778 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:00.037 null7 00:08:00.037 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:00.037 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.037 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:00.037 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.037 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.037 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:00.037 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.296 13:18:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 120269 120270 120272 120274 120276 120278 120280 120282
00:08:00.296 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:00.555 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.814 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.073 13:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.331 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.332 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:01.332 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.332 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.332 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:01.332 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.332 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.332 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.590 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.849 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.107 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:02.366 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:02.366 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:02.366 13:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:02.366 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:02.366 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:02.366 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.366 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:02.366 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:02.625 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:02.885 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.143 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.144 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:03.402 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.660 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:03.919 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.178 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.178 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:04.178 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:04.178 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:04.178 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.437 
13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.437 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.696 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.955 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.213 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.213 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.214 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.214 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.214 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.214 13:18:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.214 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.214 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.472 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.731 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.296 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.297 rmmod nvme_tcp 00:08:06.297 rmmod nvme_fabrics 00:08:06.297 rmmod nvme_keyring 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 115803 ']' 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 115803 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 115803 ']' 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 115803 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.297 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115803 00:08:06.297 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:06.297 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:06.297 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115803' 00:08:06.297 killing process with pid 115803 00:08:06.297 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 115803 00:08:06.297 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 115803 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- 
# iptables-restore 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.556 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.468 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.468 00:08:08.468 real 0m47.089s 00:08:08.468 user 3m37.466s 00:08:08.468 sys 0m16.505s 00:08:08.468 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.468 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:08.468 ************************************ 00:08:08.468 END TEST nvmf_ns_hotplug_stress 00:08:08.468 ************************************ 00:08:08.468 13:19:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:08.468 13:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:08.468 13:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.468 13:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.468 ************************************ 00:08:08.468 START TEST nvmf_delete_subsystem 00:08:08.468 ************************************ 00:08:08.468 13:19:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:08.727 * Looking for test storage... 00:08:08.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.727 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.728 13:19:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.728 13:19:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.728 --rc genhtml_branch_coverage=1 00:08:08.728 --rc genhtml_function_coverage=1 00:08:08.728 --rc genhtml_legend=1 00:08:08.728 --rc geninfo_all_blocks=1 00:08:08.728 --rc geninfo_unexecuted_blocks=1 00:08:08.728 00:08:08.728 ' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.728 --rc genhtml_branch_coverage=1 00:08:08.728 --rc genhtml_function_coverage=1 00:08:08.728 --rc genhtml_legend=1 00:08:08.728 --rc geninfo_all_blocks=1 00:08:08.728 --rc geninfo_unexecuted_blocks=1 00:08:08.728 00:08:08.728 ' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.728 --rc genhtml_branch_coverage=1 00:08:08.728 --rc genhtml_function_coverage=1 00:08:08.728 --rc genhtml_legend=1 00:08:08.728 --rc geninfo_all_blocks=1 00:08:08.728 --rc geninfo_unexecuted_blocks=1 00:08:08.728 00:08:08.728 ' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.728 --rc genhtml_branch_coverage=1 00:08:08.728 --rc genhtml_function_coverage=1 00:08:08.728 --rc genhtml_legend=1 00:08:08.728 --rc geninfo_all_blocks=1 00:08:08.728 --rc geninfo_unexecuted_blocks=1 00:08:08.728 00:08:08.728 ' 
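The trace above shows scripts/common.sh probing the installed lcov version: `lt 1.15 2` delegates to `cmp_versions`, which splits each version string on `.`, `-` and `:` into arrays and compares them component by component. A condensed sketch of that logic, reconstructed from the traced commands (the helper names mirror the log, but the body is a simplification for illustration, not the exact SPDK script):

```shell
#!/usr/bin/env bash
# lt: "less than" wrapper, as traced in the log (lt 1.15 2).
lt() { cmp_versions "$1" "<" "$2"; }

# cmp_versions: split both versions on '.', '-' and ':' (as the
# traced IFS=.-: reads do), then compare numerically left to right.
# Missing components are treated as 0, so 2 < 2.1 holds.
cmp_versions() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    local op=$2
    IFS='.-:' read -ra ver2 <<< "$3"
    local v d1 d2
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}
        d2=${ver2[v]:-0}
        if (( d1 > d2 )); then [[ $op == ">" || $op == ">=" ]]; return; fi
        if (( d1 < d2 )); then [[ $op == "<" || $op == "<=" ]]; return; fi
    done
    # All components equal: only the equality-accepting operators hold.
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}
```

With this sketch, `lt 1.15 2` succeeds (1 < 2 on the first component, so the `--rc lcov_branch_coverage` options get exported in the log that follows), while `lt 2.0 1.9` fails.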
00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.728 13:19:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.728 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.263 13:19:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:11.263 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:11.263 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:11.263 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:11.263 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:11.263 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:11.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:11.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:08:11.264 00:08:11.264 --- 10.0.0.2 ping statistics --- 00:08:11.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.264 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:08:11.264 00:08:11.264 --- 10.0.0.1 ping statistics --- 00:08:11.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.264 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:11.264 13:19:02 
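The trace above (`nvmftestinit` / `nvmf_tcp_init`) builds the test network by moving one port of a two-port NIC into a dedicated namespace for the target while its peer stays on the host as the initiator, then verifying both directions with a ping. A minimal sketch of the same topology, assuming root privileges and the interface names specific to this test host (`cvl_0_0`, `cvl_0_1`):

```shell
# Sketch of the netns setup performed by nvmftestinit (interface names are
# host-specific; requires root and real NIC ports, so not portable as-is).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side, isolated
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays on host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to port 4420 ahead of any existing firewall rules.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
```

This is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`: it listens on 10.0.0.2 inside the namespace while perf connects from the host side.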
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=123166 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 123166 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 123166 ']' 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.264 13:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.264 [2024-10-14 13:19:02.899690] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:08:11.264 [2024-10-14 13:19:02.899788] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.264 [2024-10-14 13:19:02.964770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:11.264 [2024-10-14 13:19:03.011577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.264 [2024-10-14 13:19:03.011633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.264 [2024-10-14 13:19:03.011647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.264 [2024-10-14 13:19:03.011658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.264 [2024-10-14 13:19:03.011668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:11.264 [2024-10-14 13:19:03.013049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.264 [2024-10-14 13:19:03.013055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.522 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.522 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.523 [2024-10-14 13:19:03.159167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.523 [2024-10-14 13:19:03.175370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.523 NULL1 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.523 Delay0 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.523 13:19:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=123201 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:11.523 13:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:11.523 [2024-10-14 13:19:03.250224] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
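Between the target start and the perf run, `delete_subsystem.sh` assembles the subsystem over JSON-RPC. The equivalent sequence, sketched from the `rpc_cmd` calls visible in the trace (NQNs, addresses, and parameters taken from this log; `rpc.py` path abbreviated relative to the SPDK checkout):

```shell
# RPC sequence mirrored from delete_subsystem.sh, issued against the target
# started inside the cvl_0_0_ns_spdk namespace.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512               # 1000 MB backing, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s latency on every I/O
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# With completions held up by the delay bdev, perf is started in the background:
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
```

The delay bdev is the point of the test: it guarantees a deep queue of in-flight I/O exists when `nvmf_delete_subsystem` fires below.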
00:08:13.420 13:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.420 13:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.420 13:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error 
(sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with 
error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 starting I/O failed: -6 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Read completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.678 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 starting I/O failed: -6 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 starting I/O failed: -6 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 [2024-10-14 13:19:05.373382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3cdc00d640 is same with the state(6) to be set 00:08:13.679 Read completed with error (sct=0, sc=8) 
00:08:13.679 starting I/O failed: -6 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error 
(sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 starting I/O failed: -6 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error 
(sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Write completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:13.679 Read completed with error (sct=0, sc=8) 00:08:14.614 [2024-10-14 13:19:06.345718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadfd00 is same with the state(6) to be set 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 
00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 [2024-10-14 13:19:06.375409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3cdc00d310 is same with the state(6) to be set 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 
Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 [2024-10-14 13:19:06.375869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae25c0 is same with the state(6) to be set 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with 
error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 [2024-10-14 13:19:06.376119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae20b0 is same with the state(6) to be set 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 
00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Read completed with error (sct=0, sc=8) 00:08:14.614 Write completed with error (sct=0, sc=8) 00:08:14.614 [2024-10-14 13:19:06.376376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae1ed0 is same with the state(6) to be set 00:08:14.614 Initializing NVMe Controllers 00:08:14.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.614 Controller IO queue size 128, less than required. 00:08:14.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:14.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:14.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:14.615 Initialization complete. Launching workers. 
00:08:14.615 ======================================================== 00:08:14.615 Latency(us) 00:08:14.615 Device Information : IOPS MiB/s Average min max 00:08:14.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.01 0.09 955920.34 910.51 1012674.22 00:08:14.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.82 0.07 894900.95 454.29 1014221.01 00:08:14.615 ======================================================== 00:08:14.615 Total : 341.83 0.17 928997.42 454.29 1014221.01 00:08:14.615 00:08:14.615 [2024-10-14 13:19:06.377336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadfd00 (9): Bad file descriptor 00:08:14.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:14.615 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.615 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:14.615 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 123201 00:08:14.615 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 123201 00:08:15.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (123201) - No such process 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 123201 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:15.181 13:19:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 123201 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 123201 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.181 
13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.181 [2024-10-14 13:19:06.900619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=123609 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:15.181 13:19:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.181 [2024-10-14 13:19:06.962990] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:15.747 13:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.747 13:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:15.747 13:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.318 13:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.318 13:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:16.318 13:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.575 13:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.575 13:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:16.575 13:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.141 13:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.141 13:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:17.141 13:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.706 13:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.706 13:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:17.706 13:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.273 13:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.273 13:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:18.273 13:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.530 Initializing NVMe Controllers 00:08:18.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:18.530 Controller IO queue size 128, less than required. 00:08:18.530 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:18.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:18.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:18.531 Initialization complete. Launching workers. 00:08:18.531 ======================================================== 00:08:18.531 Latency(us) 00:08:18.531 Device Information : IOPS MiB/s Average min max 00:08:18.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005252.65 1000168.28 1043423.68 00:08:18.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004273.96 1000173.44 1010960.90 00:08:18.531 ======================================================== 00:08:18.531 Total : 256.00 0.12 1004763.30 1000168.28 1043423.68 00:08:18.531 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123609 00:08:18.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (123609) - No such process 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 123609 00:08:18.789 13:19:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.789 rmmod nvme_tcp 00:08:18.789 rmmod nvme_fabrics 00:08:18.789 rmmod nvme_keyring 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 123166 ']' 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 123166 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 123166 ']' 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 123166 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123166 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123166' 00:08:18.789 killing process with pid 123166 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 123166 00:08:18.789 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 123166 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.049 13:19:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.049 13:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.962 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.962 00:08:20.962 real 0m12.462s 00:08:20.962 user 0m27.744s 00:08:20.962 sys 0m3.146s 00:08:20.962 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.962 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.962 ************************************ 00:08:20.962 END TEST nvmf_delete_subsystem 00:08:20.962 ************************************ 00:08:20.962 13:19:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.962 13:19:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.962 13:19:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.962 13:19:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.221 ************************************ 00:08:21.221 START TEST nvmf_host_management 00:08:21.221 ************************************ 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:21.221 * Looking for test storage... 
00:08:21.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.221 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:21.222 13:19:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.222 13:19:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.222 --rc genhtml_branch_coverage=1 00:08:21.222 --rc genhtml_function_coverage=1 00:08:21.222 --rc genhtml_legend=1 00:08:21.222 --rc geninfo_all_blocks=1 00:08:21.222 --rc geninfo_unexecuted_blocks=1 00:08:21.222 00:08:21.222 ' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.222 --rc genhtml_branch_coverage=1 00:08:21.222 --rc genhtml_function_coverage=1 00:08:21.222 --rc genhtml_legend=1 00:08:21.222 --rc geninfo_all_blocks=1 00:08:21.222 --rc geninfo_unexecuted_blocks=1 00:08:21.222 00:08:21.222 ' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.222 --rc genhtml_branch_coverage=1 00:08:21.222 --rc genhtml_function_coverage=1 00:08:21.222 --rc genhtml_legend=1 00:08:21.222 --rc geninfo_all_blocks=1 00:08:21.222 --rc geninfo_unexecuted_blocks=1 00:08:21.222 00:08:21.222 ' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:21.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.222 --rc genhtml_branch_coverage=1 00:08:21.222 --rc genhtml_function_coverage=1 00:08:21.222 --rc genhtml_legend=1 00:08:21.222 --rc geninfo_all_blocks=1 00:08:21.222 --rc geninfo_unexecuted_blocks=1 00:08:21.222 00:08:21.222 ' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:21.222 13:19:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.222 13:19:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.223 13:19:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.223 13:19:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:21.223 13:19:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:21.223 13:19:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.223 13:19:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.761 13:19:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.761 13:19:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.761 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.761 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.761 13:19:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:23.761 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.762 13:19:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:08:23.762 00:08:23.762 --- 10.0.0.2 ping statistics --- 00:08:23.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.762 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:23.762 00:08:23.762 --- 10.0.0.1 ping statistics --- 00:08:23.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.762 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 
00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=126079 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 126079 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 126079 ']' 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.762 [2024-10-14 13:19:15.366344] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:08:23.762 [2024-10-14 13:19:15.366428] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.762 [2024-10-14 13:19:15.431838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.762 [2024-10-14 13:19:15.476394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.762 [2024-10-14 13:19:15.476454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.762 [2024-10-14 13:19:15.476481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.762 [2024-10-14 13:19:15.476492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.762 [2024-10-14 13:19:15.476501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:23.762 [2024-10-14 13:19:15.478028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.762 [2024-10-14 13:19:15.478101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.762 [2024-10-14 13:19:15.478174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:23.762 [2024-10-14 13:19:15.478177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.762 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 [2024-10-14 13:19:15.616632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:24.021 13:19:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 Malloc0 00:08:24.021 [2024-10-14 13:19:15.689670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=126139 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 126139 /var/tmp/bdevperf.sock 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 126139 ']' 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:24.021 { 00:08:24.021 "params": { 00:08:24.021 "name": "Nvme$subsystem", 00:08:24.021 "trtype": "$TEST_TRANSPORT", 00:08:24.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.021 "adrfam": "ipv4", 00:08:24.021 "trsvcid": "$NVMF_PORT", 00:08:24.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.021 "hdgst": ${hdgst:-false}, 
00:08:24.021 "ddgst": ${ddgst:-false} 00:08:24.021 }, 00:08:24.021 "method": "bdev_nvme_attach_controller" 00:08:24.021 } 00:08:24.021 EOF 00:08:24.021 )") 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:24.021 13:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:24.021 "params": { 00:08:24.021 "name": "Nvme0", 00:08:24.021 "trtype": "tcp", 00:08:24.021 "traddr": "10.0.0.2", 00:08:24.021 "adrfam": "ipv4", 00:08:24.021 "trsvcid": "4420", 00:08:24.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:24.021 "hdgst": false, 00:08:24.021 "ddgst": false 00:08:24.021 }, 00:08:24.021 "method": "bdev_nvme_attach_controller" 00:08:24.021 }' 00:08:24.021 [2024-10-14 13:19:15.764991] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:08:24.021 [2024-10-14 13:19:15.765064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126139 ] 00:08:24.021 [2024-10-14 13:19:15.827501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.021 [2024-10-14 13:19:15.874479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.588 Running I/O for 10 seconds... 
00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:24.588 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.856 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.856 [2024-10-14 13:19:16.613568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498670 is same with the state(6) to be set 00:08:24.856 [2024-10-14 13:19:16.614265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.856 [2024-10-14 13:19:16.614569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.856 [2024-10-14 13:19:16.614583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614740] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.614981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.614997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 
[2024-10-14 13:19:16.615085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.857 [2024-10-14 13:19:16.615273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.857 [2024-10-14 13:19:16.615287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [identical WRITE command / ABORTED - SQ DELETION completion pairs repeated for cid:33-62, lba:86144-89856] 00:08:24.858 [2024-10-14 13:19:16.616206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.858 [2024-10-14 13:19:16.616224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.858 [2024-10-14 13:19:16.616307] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a23e80 was disconnected and freed. reset controller. 
00:08:24.858 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.858 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.858 [2024-10-14 13:19:16.617427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:24.858 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.858 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.858 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:24.858 00:08:24.858 Latency(us) 00:08:24.858 [2024-10-14T11:19:16.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:24.858 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:24.858 Verification LBA range: start 0x0 length 0x400 00:08:24.858 Nvme0n1 : 0.40 1599.38 99.96 159.94 0.00 35317.43 4223.43 34175.81 00:08:24.858 [2024-10-14T11:19:16.711Z] =================================================================================================================== 00:08:24.858 [2024-10-14T11:19:16.711Z] Total : 1599.38 99.96 159.94 0.00 35317.43 4223.43 34175.81 00:08:24.858 [2024-10-14 13:19:16.619332] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.858 [2024-10-14 13:19:16.619361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180ae00 (9): Bad file descriptor 00:08:24.858 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.858 13:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:24.858 [2024-10-14 13:19:16.665434] 
bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 126139 00:08:25.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (126139) - No such process 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:25.793 { 00:08:25.793 "params": { 00:08:25.793 "name": "Nvme$subsystem", 00:08:25.793 "trtype": "$TEST_TRANSPORT", 00:08:25.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.793 "adrfam": "ipv4", 00:08:25.793 "trsvcid": "$NVMF_PORT", 00:08:25.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.793 "hdgst": ${hdgst:-false}, 00:08:25.793 "ddgst": ${ddgst:-false} 00:08:25.793 }, 00:08:25.793 "method": 
"bdev_nvme_attach_controller" 00:08:25.793 } 00:08:25.793 EOF 00:08:25.793 )") 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:25.793 13:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:25.793 "params": { 00:08:25.793 "name": "Nvme0", 00:08:25.793 "trtype": "tcp", 00:08:25.793 "traddr": "10.0.0.2", 00:08:25.793 "adrfam": "ipv4", 00:08:25.793 "trsvcid": "4420", 00:08:25.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:25.793 "hdgst": false, 00:08:25.793 "ddgst": false 00:08:25.793 }, 00:08:25.793 "method": "bdev_nvme_attach_controller" 00:08:25.793 }' 00:08:26.052 [2024-10-14 13:19:17.676631] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:08:26.052 [2024-10-14 13:19:17.676704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126412 ] 00:08:26.052 [2024-10-14 13:19:17.738021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.052 [2024-10-14 13:19:17.786801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.309 Running I/O for 1 seconds... 
00:08:27.243 1664.00 IOPS, 104.00 MiB/s 00:08:27.243 Latency(us) 00:08:27.243 [2024-10-14T11:19:19.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.243 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:27.243 Verification LBA range: start 0x0 length 0x400 00:08:27.243 Nvme0n1 : 1.03 1684.60 105.29 0.00 0.00 37376.09 5655.51 33010.73 00:08:27.243 [2024-10-14T11:19:19.096Z] =================================================================================================================== 00:08:27.243 [2024-10-14T11:19:19.096Z] Total : 1684.60 105.29 0.00 0.00 37376.09 5655.51 33010.73 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.501 13:19:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.501 rmmod nvme_tcp 00:08:27.501 rmmod nvme_fabrics 00:08:27.501 rmmod nvme_keyring 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 126079 ']' 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 126079 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 126079 ']' 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 126079 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126079 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126079' 00:08:27.501 killing process with pid 126079 00:08:27.501 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 126079 00:08:27.501 13:19:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 126079 00:08:27.760 [2024-10-14 13:19:19.528759] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:27.760 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:27.760 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:27.760 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:27.760 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:27.760 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:08:27.760 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:27.761 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:08:27.761 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.761 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.761 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.761 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.761 13:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:30.304 00:08:30.304 real 0m8.776s 00:08:30.304 user 0m19.404s 
00:08:30.304 sys 0m2.748s 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.304 ************************************ 00:08:30.304 END TEST nvmf_host_management 00:08:30.304 ************************************ 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.304 ************************************ 00:08:30.304 START TEST nvmf_lvol 00:08:30.304 ************************************ 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:30.304 * Looking for test storage... 
00:08:30.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.304 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.305 13:19:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.305 --rc genhtml_branch_coverage=1 00:08:30.305 --rc genhtml_function_coverage=1 00:08:30.305 --rc genhtml_legend=1 00:08:30.305 --rc geninfo_all_blocks=1 00:08:30.305 --rc geninfo_unexecuted_blocks=1 
00:08:30.305 00:08:30.305 ' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.305 --rc genhtml_branch_coverage=1 00:08:30.305 --rc genhtml_function_coverage=1 00:08:30.305 --rc genhtml_legend=1 00:08:30.305 --rc geninfo_all_blocks=1 00:08:30.305 --rc geninfo_unexecuted_blocks=1 00:08:30.305 00:08:30.305 ' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.305 --rc genhtml_branch_coverage=1 00:08:30.305 --rc genhtml_function_coverage=1 00:08:30.305 --rc genhtml_legend=1 00:08:30.305 --rc geninfo_all_blocks=1 00:08:30.305 --rc geninfo_unexecuted_blocks=1 00:08:30.305 00:08:30.305 ' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.305 --rc genhtml_branch_coverage=1 00:08:30.305 --rc genhtml_function_coverage=1 00:08:30.305 --rc genhtml_legend=1 00:08:30.305 --rc geninfo_all_blocks=1 00:08:30.305 --rc geninfo_unexecuted_blocks=1 00:08:30.305 00:08:30.305 ' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.305 13:19:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.305 13:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.212 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.213 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.213 
13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.213 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.213 13:19:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.213 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.213 13:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.213 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.213 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.213 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.213 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.472 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:08:32.473 00:08:32.473 --- 10.0.0.2 ping statistics --- 00:08:32.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.473 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:32.473 00:08:32.473 --- 10.0.0.1 ping statistics --- 00:08:32.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.473 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=128511 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 128511 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 128511 ']' 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.473 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.473 [2024-10-14 13:19:24.186277] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:08:32.473 [2024-10-14 13:19:24.186381] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.473 [2024-10-14 13:19:24.254104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.473 [2024-10-14 13:19:24.301718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.473 [2024-10-14 13:19:24.301771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.473 [2024-10-14 13:19:24.301798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.473 [2024-10-14 13:19:24.301809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.473 [2024-10-14 13:19:24.301818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.473 [2024-10-14 13:19:24.303291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.473 [2024-10-14 13:19:24.306149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.473 [2024-10-14 13:19:24.306155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.745 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.745 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:32.745 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:32.745 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.745 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.745 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.746 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:33.010 [2024-10-14 13:19:24.693236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.010 13:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.268 13:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:33.269 13:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.527 13:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:33.527 13:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:33.785 13:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:34.043 13:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=956ea812-ff2b-4b9a-84bb-98b487d8bff7 00:08:34.043 13:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 956ea812-ff2b-4b9a-84bb-98b487d8bff7 lvol 20 00:08:34.302 13:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b80968f7-4300-42d4-9c1f-4d9d40656015 00:08:34.302 13:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.559 13:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b80968f7-4300-42d4-9c1f-4d9d40656015 00:08:34.817 13:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:35.075 [2024-10-14 13:19:26.900949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.075 13:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.641 13:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=128937 00:08:35.641 13:19:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:35.641 13:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:36.576 13:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b80968f7-4300-42d4-9c1f-4d9d40656015 MY_SNAPSHOT 00:08:36.834 13:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=611df2b4-c71d-4115-b022-597a91f69200 00:08:36.834 13:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b80968f7-4300-42d4-9c1f-4d9d40656015 30 00:08:37.092 13:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 611df2b4-c71d-4115-b022-597a91f69200 MY_CLONE 00:08:37.350 13:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5a44dc69-c5a4-4512-af67-697021c74a35 00:08:37.350 13:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5a44dc69-c5a4-4512-af67-697021c74a35 00:08:38.285 13:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 128937 00:08:46.399 Initializing NVMe Controllers 00:08:46.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:46.399 Controller IO queue size 128, less than required. 00:08:46.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:46.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:46.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:46.400 Initialization complete. Launching workers. 00:08:46.400 ======================================================== 00:08:46.400 Latency(us) 00:08:46.400 Device Information : IOPS MiB/s Average min max 00:08:46.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10671.30 41.68 11995.27 2111.85 64335.97 00:08:46.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10581.30 41.33 12105.52 2121.93 79919.94 00:08:46.400 ======================================================== 00:08:46.400 Total : 21252.60 83.02 12050.16 2111.85 79919.94 00:08:46.400 00:08:46.400 13:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.400 13:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b80968f7-4300-42d4-9c1f-4d9d40656015 00:08:46.400 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 956ea812-ff2b-4b9a-84bb-98b487d8bff7 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.658 rmmod nvme_tcp 00:08:46.658 rmmod nvme_fabrics 00:08:46.658 rmmod nvme_keyring 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 128511 ']' 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 128511 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 128511 ']' 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 128511 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.658 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128511 00:08:46.916 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.916 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.916 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128511' 00:08:46.916 killing process with pid 128511 00:08:46.916 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@969 -- # kill 128511 00:08:46.916 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 128511 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.176 13:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.083 00:08:49.083 real 0m19.164s 00:08:49.083 user 1m5.571s 00:08:49.083 sys 0m5.479s 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.083 ************************************ 00:08:49.083 END TEST nvmf_lvol 00:08:49.083 
************************************ 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.083 ************************************ 00:08:49.083 START TEST nvmf_lvs_grow 00:08:49.083 ************************************ 00:08:49.083 13:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:49.083 * Looking for test storage... 00:08:49.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.084 13:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.084 13:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.084 13:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.342 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.343 --rc genhtml_branch_coverage=1 00:08:49.343 --rc genhtml_function_coverage=1 00:08:49.343 --rc genhtml_legend=1 00:08:49.343 --rc geninfo_all_blocks=1 00:08:49.343 --rc geninfo_unexecuted_blocks=1 00:08:49.343 00:08:49.343 ' 
00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.343 --rc genhtml_branch_coverage=1 00:08:49.343 --rc genhtml_function_coverage=1 00:08:49.343 --rc genhtml_legend=1 00:08:49.343 --rc geninfo_all_blocks=1 00:08:49.343 --rc geninfo_unexecuted_blocks=1 00:08:49.343 00:08:49.343 ' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.343 --rc genhtml_branch_coverage=1 00:08:49.343 --rc genhtml_function_coverage=1 00:08:49.343 --rc genhtml_legend=1 00:08:49.343 --rc geninfo_all_blocks=1 00:08:49.343 --rc geninfo_unexecuted_blocks=1 00:08:49.343 00:08:49.343 ' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.343 --rc genhtml_branch_coverage=1 00:08:49.343 --rc genhtml_function_coverage=1 00:08:49.343 --rc genhtml_legend=1 00:08:49.343 --rc geninfo_all_blocks=1 00:08:49.343 --rc geninfo_unexecuted_blocks=1 00:08:49.343 00:08:49.343 ' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.343 13:19:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.343 
13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.343 13:19:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.343 
13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.343 13:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:51.879 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:51.879 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.879 
13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:51.879 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:51.879 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.879 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.880 13:19:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:51.880 00:08:51.880 --- 10.0.0.2 ping statistics --- 00:08:51.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.880 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:08:51.880 00:08:51.880 --- 10.0.0.1 ping statistics --- 00:08:51.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.880 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=132227 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 132227 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 132227 ']' 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.880 [2024-10-14 13:19:43.426990] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:08:51.880 [2024-10-14 13:19:43.427063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.880 [2024-10-14 13:19:43.490457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.880 [2024-10-14 13:19:43.536752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.880 [2024-10-14 13:19:43.536800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.880 [2024-10-14 13:19:43.536828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.880 [2024-10-14 13:19:43.536839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.880 [2024-10-14 13:19:43.536848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.880 [2024-10-14 13:19:43.537535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.880 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:52.138 [2024-10-14 13:19:43.932091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.138 ************************************ 00:08:52.138 START TEST lvs_grow_clean 00:08:52.138 ************************************ 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:52.138 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:52.396 13:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.654 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:52.654 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:52.912 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a58bef46-9f4b-4fce-a24b-35d13fc59112 00:08:52.912 13:19:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:08:52.912 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:53.171 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:53.171 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:53.171 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a58bef46-9f4b-4fce-a24b-35d13fc59112 lvol 150 00:08:53.430 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d423af81-07ba-497a-b046-7693d7c13a71 00:08:53.430 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.430 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:53.688 [2024-10-14 13:19:45.334499] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:53.688 [2024-10-14 13:19:45.334610] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:53.688 true 00:08:53.688 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:08:53.688 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:53.947 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:53.947 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:54.205 13:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d423af81-07ba-497a-b046-7693d7c13a71 00:08:54.464 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:54.722 [2024-10-14 13:19:46.421849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.722 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132665 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:54.982 13:19:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 132665 /var/tmp/bdevperf.sock 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 132665 ']' 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.982 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:54.982 [2024-10-14 13:19:46.751376] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:08:54.982 [2024-10-14 13:19:46.751463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132665 ] 00:08:54.982 [2024-10-14 13:19:46.809187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.240 [2024-10-14 13:19:46.857089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.240 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.240 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:55.240 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:55.498 Nvme0n1 00:08:55.498 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:55.755 [ 00:08:55.755 { 00:08:55.755 "name": "Nvme0n1", 00:08:55.755 "aliases": [ 00:08:55.755 "d423af81-07ba-497a-b046-7693d7c13a71" 00:08:55.755 ], 00:08:55.755 "product_name": "NVMe disk", 00:08:55.755 "block_size": 4096, 00:08:55.755 "num_blocks": 38912, 00:08:55.755 "uuid": "d423af81-07ba-497a-b046-7693d7c13a71", 00:08:55.755 "numa_id": 0, 00:08:55.755 "assigned_rate_limits": { 00:08:55.755 "rw_ios_per_sec": 0, 00:08:55.755 "rw_mbytes_per_sec": 0, 00:08:55.755 "r_mbytes_per_sec": 0, 00:08:55.755 "w_mbytes_per_sec": 0 00:08:55.755 }, 00:08:55.755 "claimed": false, 00:08:55.755 "zoned": false, 00:08:55.755 "supported_io_types": { 00:08:55.755 "read": true, 
00:08:55.755 "write": true, 00:08:55.755 "unmap": true, 00:08:55.755 "flush": true, 00:08:55.755 "reset": true, 00:08:55.755 "nvme_admin": true, 00:08:55.755 "nvme_io": true, 00:08:55.755 "nvme_io_md": false, 00:08:55.755 "write_zeroes": true, 00:08:55.755 "zcopy": false, 00:08:55.755 "get_zone_info": false, 00:08:55.755 "zone_management": false, 00:08:55.755 "zone_append": false, 00:08:55.755 "compare": true, 00:08:55.755 "compare_and_write": true, 00:08:55.755 "abort": true, 00:08:55.755 "seek_hole": false, 00:08:55.755 "seek_data": false, 00:08:55.755 "copy": true, 00:08:55.755 "nvme_iov_md": false 00:08:55.755 }, 00:08:55.755 "memory_domains": [ 00:08:55.755 { 00:08:55.755 "dma_device_id": "system", 00:08:55.755 "dma_device_type": 1 00:08:55.755 } 00:08:55.755 ], 00:08:55.755 "driver_specific": { 00:08:55.755 "nvme": [ 00:08:55.755 { 00:08:55.755 "trid": { 00:08:55.755 "trtype": "TCP", 00:08:55.755 "adrfam": "IPv4", 00:08:55.755 "traddr": "10.0.0.2", 00:08:55.755 "trsvcid": "4420", 00:08:55.755 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:55.755 }, 00:08:55.755 "ctrlr_data": { 00:08:55.755 "cntlid": 1, 00:08:55.756 "vendor_id": "0x8086", 00:08:55.756 "model_number": "SPDK bdev Controller", 00:08:55.756 "serial_number": "SPDK0", 00:08:55.756 "firmware_revision": "25.01", 00:08:55.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.756 "oacs": { 00:08:55.756 "security": 0, 00:08:55.756 "format": 0, 00:08:55.756 "firmware": 0, 00:08:55.756 "ns_manage": 0 00:08:55.756 }, 00:08:55.756 "multi_ctrlr": true, 00:08:55.756 "ana_reporting": false 00:08:55.756 }, 00:08:55.756 "vs": { 00:08:55.756 "nvme_version": "1.3" 00:08:55.756 }, 00:08:55.756 "ns_data": { 00:08:55.756 "id": 1, 00:08:55.756 "can_share": true 00:08:55.756 } 00:08:55.756 } 00:08:55.756 ], 00:08:55.756 "mp_policy": "active_passive" 00:08:55.756 } 00:08:55.756 } 00:08:55.756 ] 00:08:55.756 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132796 
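The sizes exercised up to this point are internally consistent: the 200M backing file at a 4096-byte block size yields the 51200 blocks reported before the rescan, the `truncate -s 400M` yields 102400, and the 150M thick lvol on 4 MiB clusters rounds up to the 38 allocated clusters shown in the `bdev_get_bdevs` output above. A minimal sketch of that arithmetic follows; the helper names are hypothetical and are not part of the test scripts or the SPDK RPC surface:

```python
MIB = 1024 * 1024

def aio_block_count(size_mib: int, block_size: int = 4096) -> int:
    """Blocks a bdev_aio exposes for a backing file of the given size."""
    return size_mib * MIB // block_size

def thick_lvol_clusters(lvol_mib: int, cluster_mib: int = 4) -> int:
    """Clusters a thick-provisioned lvol consumes, rounded up."""
    return -(-lvol_mib // cluster_mib)  # ceiling division

# 200 MiB file -> 51200 blocks; after truncate -s 400M -> 102400 blocks,
# matching the bdev_aio_rescan notice ("old block count 51200, new block count 102400").
assert aio_block_count(200) == 51200
assert aio_block_count(400) == 102400

# 150 MiB lvol on 4 MiB (4194304-byte) clusters -> 38 clusters,
# matching "num_allocated_clusters": 38 in the bdev_get_bdevs dump.
assert thick_lvol_clusters(150) == 38
```

Note that the lvstore reports 49 data clusters before the grow and 99 after, rather than the raw 50 and 100 a 200M/400M device would suggest; the difference is space the lvstore keeps for its own metadata, which is why the script compares against 49 and 99 rather than round numbers.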
00:08:55.756 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.756 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:56.013 Running I/O for 10 seconds... 00:08:56.948 Latency(us) 00:08:56.948 [2024-10-14T11:19:48.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.948 Nvme0n1 : 1.00 15052.00 58.80 0.00 0.00 0.00 0.00 0.00 00:08:56.948 [2024-10-14T11:19:48.801Z] =================================================================================================================== 00:08:56.948 [2024-10-14T11:19:48.801Z] Total : 15052.00 58.80 0.00 0.00 0.00 0.00 0.00 00:08:56.948 00:08:57.883 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:08:57.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.883 Nvme0n1 : 2.00 15273.00 59.66 0.00 0.00 0.00 0.00 0.00 00:08:57.883 [2024-10-14T11:19:49.736Z] =================================================================================================================== 00:08:57.883 [2024-10-14T11:19:49.736Z] Total : 15273.00 59.66 0.00 0.00 0.00 0.00 0.00 00:08:57.883 00:08:58.142 true 00:08:58.142 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:08:58.142 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:58.400 13:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:58.400 13:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:58.400 13:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132796 00:08:58.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.967 Nvme0n1 : 3.00 15283.67 59.70 0.00 0.00 0.00 0.00 0.00 00:08:58.967 [2024-10-14T11:19:50.820Z] =================================================================================================================== 00:08:58.967 [2024-10-14T11:19:50.820Z] Total : 15283.67 59.70 0.00 0.00 0.00 0.00 0.00 00:08:58.967 00:08:59.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.903 Nvme0n1 : 4.00 15399.75 60.16 0.00 0.00 0.00 0.00 0.00 00:08:59.903 [2024-10-14T11:19:51.756Z] =================================================================================================================== 00:08:59.903 [2024-10-14T11:19:51.756Z] Total : 15399.75 60.16 0.00 0.00 0.00 0.00 0.00 00:08:59.903 00:09:01.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.280 Nvme0n1 : 5.00 15476.20 60.45 0.00 0.00 0.00 0.00 0.00 00:09:01.280 [2024-10-14T11:19:53.133Z] =================================================================================================================== 00:09:01.280 [2024-10-14T11:19:53.133Z] Total : 15476.20 60.45 0.00 0.00 0.00 0.00 0.00 00:09:01.280 00:09:01.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.846 Nvme0n1 : 6.00 15532.33 60.67 0.00 0.00 0.00 0.00 0.00 00:09:01.846 [2024-10-14T11:19:53.699Z] =================================================================================================================== 00:09:01.846 
[2024-10-14T11:19:53.700Z] Total : 15532.33 60.67 0.00 0.00 0.00 0.00 0.00 00:09:01.847 00:09:03.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.222 Nvme0n1 : 7.00 15581.29 60.86 0.00 0.00 0.00 0.00 0.00 00:09:03.222 [2024-10-14T11:19:55.075Z] =================================================================================================================== 00:09:03.222 [2024-10-14T11:19:55.075Z] Total : 15581.29 60.86 0.00 0.00 0.00 0.00 0.00 00:09:03.222 00:09:04.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.157 Nvme0n1 : 8.00 15618.00 61.01 0.00 0.00 0.00 0.00 0.00 00:09:04.157 [2024-10-14T11:19:56.010Z] =================================================================================================================== 00:09:04.157 [2024-10-14T11:19:56.010Z] Total : 15618.00 61.01 0.00 0.00 0.00 0.00 0.00 00:09:04.157 00:09:05.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.093 Nvme0n1 : 9.00 15639.56 61.09 0.00 0.00 0.00 0.00 0.00 00:09:05.093 [2024-10-14T11:19:56.946Z] =================================================================================================================== 00:09:05.093 [2024-10-14T11:19:56.946Z] Total : 15639.56 61.09 0.00 0.00 0.00 0.00 0.00 00:09:05.093 00:09:06.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.028 Nvme0n1 : 10.00 15663.20 61.18 0.00 0.00 0.00 0.00 0.00 00:09:06.028 [2024-10-14T11:19:57.881Z] =================================================================================================================== 00:09:06.028 [2024-10-14T11:19:57.881Z] Total : 15663.20 61.18 0.00 0.00 0.00 0.00 0.00 00:09:06.028 00:09:06.028 00:09:06.028 Latency(us) 00:09:06.028 [2024-10-14T11:19:57.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:06.029 Nvme0n1 : 10.00 15663.37 61.19 0.00 0.00 8166.88 4320.52 15922.82 00:09:06.029 [2024-10-14T11:19:57.882Z] =================================================================================================================== 00:09:06.029 [2024-10-14T11:19:57.882Z] Total : 15663.37 61.19 0.00 0.00 8166.88 4320.52 15922.82 00:09:06.029 { 00:09:06.029 "results": [ 00:09:06.029 { 00:09:06.029 "job": "Nvme0n1", 00:09:06.029 "core_mask": "0x2", 00:09:06.029 "workload": "randwrite", 00:09:06.029 "status": "finished", 00:09:06.029 "queue_depth": 128, 00:09:06.029 "io_size": 4096, 00:09:06.029 "runtime": 10.003976, 00:09:06.029 "iops": 15663.372243196105, 00:09:06.029 "mibps": 61.185047824984785, 00:09:06.029 "io_failed": 0, 00:09:06.029 "io_timeout": 0, 00:09:06.029 "avg_latency_us": 8166.881097931547, 00:09:06.029 "min_latency_us": 4320.521481481482, 00:09:06.029 "max_latency_us": 15922.82074074074 00:09:06.029 } 00:09:06.029 ], 00:09:06.029 "core_count": 1 00:09:06.029 } 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132665 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 132665 ']' 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 132665 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132665 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:06.029 13:19:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132665' 00:09:06.029 killing process with pid 132665 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 132665 00:09:06.029 Received shutdown signal, test time was about 10.000000 seconds 00:09:06.029 00:09:06.029 Latency(us) 00:09:06.029 [2024-10-14T11:19:57.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.029 [2024-10-14T11:19:57.882Z] =================================================================================================================== 00:09:06.029 [2024-10-14T11:19:57.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:06.029 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 132665 00:09:06.288 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.547 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.805 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:09:06.805 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:07.064 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:07.064 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:07.064 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:07.323 [2024-10-14 13:19:59.011925] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.323 13:19:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:07.323 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:09:07.581 request: 00:09:07.581 { 00:09:07.581 "uuid": "a58bef46-9f4b-4fce-a24b-35d13fc59112", 00:09:07.581 "method": "bdev_lvol_get_lvstores", 00:09:07.581 "req_id": 1 00:09:07.581 } 00:09:07.581 Got JSON-RPC error response 00:09:07.581 response: 00:09:07.581 { 00:09:07.581 "code": -19, 00:09:07.581 "message": "No such device" 00:09:07.581 } 00:09:07.581 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:07.581 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:07.581 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:07.581 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:07.581 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.839 aio_bdev 00:09:07.839 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d423af81-07ba-497a-b046-7693d7c13a71 00:09:07.839 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d423af81-07ba-497a-b046-7693d7c13a71 00:09:07.839 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.839 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:07.839 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.839 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.839 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.098 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d423af81-07ba-497a-b046-7693d7c13a71 -t 2000 00:09:08.356 [ 00:09:08.356 { 00:09:08.356 "name": "d423af81-07ba-497a-b046-7693d7c13a71", 00:09:08.356 "aliases": [ 00:09:08.356 "lvs/lvol" 00:09:08.356 ], 00:09:08.356 "product_name": "Logical Volume", 00:09:08.356 "block_size": 4096, 00:09:08.356 "num_blocks": 38912, 00:09:08.356 "uuid": "d423af81-07ba-497a-b046-7693d7c13a71", 00:09:08.356 "assigned_rate_limits": { 00:09:08.356 "rw_ios_per_sec": 0, 00:09:08.356 "rw_mbytes_per_sec": 0, 00:09:08.356 "r_mbytes_per_sec": 0, 00:09:08.356 "w_mbytes_per_sec": 0 00:09:08.356 }, 00:09:08.356 "claimed": false, 00:09:08.356 "zoned": false, 00:09:08.356 "supported_io_types": { 00:09:08.356 "read": true, 00:09:08.356 "write": true, 00:09:08.356 "unmap": true, 00:09:08.356 "flush": false, 00:09:08.356 "reset": true, 00:09:08.356 
"nvme_admin": false, 00:09:08.356 "nvme_io": false, 00:09:08.356 "nvme_io_md": false, 00:09:08.356 "write_zeroes": true, 00:09:08.356 "zcopy": false, 00:09:08.356 "get_zone_info": false, 00:09:08.356 "zone_management": false, 00:09:08.356 "zone_append": false, 00:09:08.356 "compare": false, 00:09:08.356 "compare_and_write": false, 00:09:08.356 "abort": false, 00:09:08.356 "seek_hole": true, 00:09:08.356 "seek_data": true, 00:09:08.356 "copy": false, 00:09:08.356 "nvme_iov_md": false 00:09:08.356 }, 00:09:08.356 "driver_specific": { 00:09:08.356 "lvol": { 00:09:08.356 "lvol_store_uuid": "a58bef46-9f4b-4fce-a24b-35d13fc59112", 00:09:08.356 "base_bdev": "aio_bdev", 00:09:08.356 "thin_provision": false, 00:09:08.356 "num_allocated_clusters": 38, 00:09:08.356 "snapshot": false, 00:09:08.356 "clone": false, 00:09:08.356 "esnap_clone": false 00:09:08.356 } 00:09:08.356 } 00:09:08.356 } 00:09:08.356 ] 00:09:08.356 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:08.357 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:09:08.357 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:08.615 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:08.615 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:09:08.615 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:08.874 13:20:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:08.874 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d423af81-07ba-497a-b046-7693d7c13a71 00:09:09.132 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a58bef46-9f4b-4fce-a24b-35d13fc59112 00:09:09.699 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.699 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.699 00:09:09.699 real 0m17.559s 00:09:09.699 user 0m17.071s 00:09:09.699 sys 0m1.879s 00:09:09.699 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.699 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 ************************************ 00:09:09.699 END TEST lvs_grow_clean 00:09:09.699 ************************************ 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.957 ************************************ 
00:09:09.957 START TEST lvs_grow_dirty 00:09:09.957 ************************************ 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.957 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.217 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:10.217 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.475 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:10.475 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:10.475 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.733 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.733 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.733 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 716ad387-7e47-4d13-8d08-2fb34ae40857 lvol 150 00:09:10.991 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1813c4be-074e-489a-957a-8fd63468709f 00:09:10.991 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.991 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:11.250 [2024-10-14 13:20:02.953513] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:11.250 [2024-10-14 13:20:02.953624] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:11.250 true 00:09:11.250 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:11.250 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:11.507 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:11.507 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.764 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1813c4be-074e-489a-957a-8fd63468709f 00:09:12.023 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:12.282 [2024-10-14 13:20:04.028764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.282 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=134852 00:09:12.541 13:20:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 134852 /var/tmp/bdevperf.sock 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 134852 ']' 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:12.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.541 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.541 [2024-10-14 13:20:04.354950] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:09:12.541 [2024-10-14 13:20:04.355031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134852 ] 00:09:12.801 [2024-10-14 13:20:04.413531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.801 [2024-10-14 13:20:04.458463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.801 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.801 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:12.801 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:13.368 Nvme0n1 00:09:13.368 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:13.626 [ 00:09:13.626 { 00:09:13.626 "name": "Nvme0n1", 00:09:13.626 "aliases": [ 00:09:13.626 "1813c4be-074e-489a-957a-8fd63468709f" 00:09:13.626 ], 00:09:13.626 "product_name": "NVMe disk", 00:09:13.626 "block_size": 4096, 00:09:13.626 "num_blocks": 38912, 00:09:13.626 "uuid": "1813c4be-074e-489a-957a-8fd63468709f", 00:09:13.626 "numa_id": 0, 00:09:13.626 "assigned_rate_limits": { 00:09:13.626 "rw_ios_per_sec": 0, 00:09:13.626 "rw_mbytes_per_sec": 0, 00:09:13.626 "r_mbytes_per_sec": 0, 00:09:13.626 "w_mbytes_per_sec": 0 00:09:13.626 }, 00:09:13.626 "claimed": false, 00:09:13.626 "zoned": false, 00:09:13.626 "supported_io_types": { 00:09:13.626 "read": true, 
00:09:13.626 "write": true, 00:09:13.626 "unmap": true, 00:09:13.626 "flush": true, 00:09:13.626 "reset": true, 00:09:13.626 "nvme_admin": true, 00:09:13.626 "nvme_io": true, 00:09:13.626 "nvme_io_md": false, 00:09:13.626 "write_zeroes": true, 00:09:13.626 "zcopy": false, 00:09:13.626 "get_zone_info": false, 00:09:13.626 "zone_management": false, 00:09:13.626 "zone_append": false, 00:09:13.626 "compare": true, 00:09:13.626 "compare_and_write": true, 00:09:13.626 "abort": true, 00:09:13.626 "seek_hole": false, 00:09:13.626 "seek_data": false, 00:09:13.626 "copy": true, 00:09:13.626 "nvme_iov_md": false 00:09:13.626 }, 00:09:13.626 "memory_domains": [ 00:09:13.626 { 00:09:13.626 "dma_device_id": "system", 00:09:13.626 "dma_device_type": 1 00:09:13.626 } 00:09:13.626 ], 00:09:13.626 "driver_specific": { 00:09:13.626 "nvme": [ 00:09:13.626 { 00:09:13.626 "trid": { 00:09:13.626 "trtype": "TCP", 00:09:13.626 "adrfam": "IPv4", 00:09:13.626 "traddr": "10.0.0.2", 00:09:13.626 "trsvcid": "4420", 00:09:13.626 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:13.626 }, 00:09:13.626 "ctrlr_data": { 00:09:13.626 "cntlid": 1, 00:09:13.626 "vendor_id": "0x8086", 00:09:13.626 "model_number": "SPDK bdev Controller", 00:09:13.626 "serial_number": "SPDK0", 00:09:13.626 "firmware_revision": "25.01", 00:09:13.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:13.626 "oacs": { 00:09:13.626 "security": 0, 00:09:13.626 "format": 0, 00:09:13.626 "firmware": 0, 00:09:13.626 "ns_manage": 0 00:09:13.626 }, 00:09:13.626 "multi_ctrlr": true, 00:09:13.626 "ana_reporting": false 00:09:13.626 }, 00:09:13.626 "vs": { 00:09:13.626 "nvme_version": "1.3" 00:09:13.626 }, 00:09:13.626 "ns_data": { 00:09:13.626 "id": 1, 00:09:13.626 "can_share": true 00:09:13.626 } 00:09:13.626 } 00:09:13.626 ], 00:09:13.626 "mp_policy": "active_passive" 00:09:13.626 } 00:09:13.626 } 00:09:13.626 ] 00:09:13.626 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=134988 
00:09:13.626 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:13.626 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:13.626 Running I/O for 10 seconds... 00:09:14.562 Latency(us) 00:09:14.562 [2024-10-14T11:20:06.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.562 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:09:14.562 [2024-10-14T11:20:06.415Z] =================================================================================================================== 00:09:14.562 [2024-10-14T11:20:06.415Z] Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:09:14.562 00:09:15.495 13:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:15.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.753 Nvme0n1 : 2.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:09:15.753 [2024-10-14T11:20:07.606Z] =================================================================================================================== 00:09:15.753 [2024-10-14T11:20:07.606Z] Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:09:15.753 00:09:15.753 true 00:09:15.753 13:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:15.753 13:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:16.318 13:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:16.318 13:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:16.318 13:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 134988 00:09:16.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.576 Nvme0n1 : 3.00 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:09:16.576 [2024-10-14T11:20:08.429Z] =================================================================================================================== 00:09:16.576 [2024-10-14T11:20:08.429Z] Total : 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:09:16.576 00:09:17.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.950 Nvme0n1 : 4.00 15526.00 60.65 0.00 0.00 0.00 0.00 0.00 00:09:17.950 [2024-10-14T11:20:09.803Z] =================================================================================================================== 00:09:17.950 [2024-10-14T11:20:09.803Z] Total : 15526.00 60.65 0.00 0.00 0.00 0.00 0.00 00:09:17.950 00:09:18.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.884 Nvme0n1 : 5.00 15595.80 60.92 0.00 0.00 0.00 0.00 0.00 00:09:18.884 [2024-10-14T11:20:10.737Z] =================================================================================================================== 00:09:18.884 [2024-10-14T11:20:10.737Z] Total : 15595.80 60.92 0.00 0.00 0.00 0.00 0.00 00:09:18.884 00:09:19.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.816 Nvme0n1 : 6.00 15642.33 61.10 0.00 0.00 0.00 0.00 0.00 00:09:19.816 [2024-10-14T11:20:11.669Z] =================================================================================================================== 00:09:19.816 
[2024-10-14T11:20:11.669Z] Total : 15642.33 61.10 0.00 0.00 0.00 0.00 0.00 00:09:19.816 00:09:20.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.749 Nvme0n1 : 7.00 15711.86 61.37 0.00 0.00 0.00 0.00 0.00 00:09:20.749 [2024-10-14T11:20:12.602Z] =================================================================================================================== 00:09:20.749 [2024-10-14T11:20:12.602Z] Total : 15711.86 61.37 0.00 0.00 0.00 0.00 0.00 00:09:20.749 00:09:21.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.681 Nvme0n1 : 8.00 15748.38 61.52 0.00 0.00 0.00 0.00 0.00 00:09:21.681 [2024-10-14T11:20:13.534Z] =================================================================================================================== 00:09:21.681 [2024-10-14T11:20:13.534Z] Total : 15748.38 61.52 0.00 0.00 0.00 0.00 0.00 00:09:21.681 00:09:22.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.616 Nvme0n1 : 9.00 15776.78 61.63 0.00 0.00 0.00 0.00 0.00 00:09:22.616 [2024-10-14T11:20:14.469Z] =================================================================================================================== 00:09:22.616 [2024-10-14T11:20:14.469Z] Total : 15776.78 61.63 0.00 0.00 0.00 0.00 0.00 00:09:22.616 00:09:23.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.991 Nvme0n1 : 10.00 15799.30 61.72 0.00 0.00 0.00 0.00 0.00 00:09:23.991 [2024-10-14T11:20:15.844Z] =================================================================================================================== 00:09:23.991 [2024-10-14T11:20:15.844Z] Total : 15799.30 61.72 0.00 0.00 0.00 0.00 0.00 00:09:23.991 00:09:23.991 00:09:23.991 Latency(us) 00:09:23.991 [2024-10-14T11:20:15.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:23.992 Nvme0n1 : 10.01 15803.03 61.73 0.00 0.00 8095.23 3131.16 15631.55 00:09:23.992 [2024-10-14T11:20:15.845Z] =================================================================================================================== 00:09:23.992 [2024-10-14T11:20:15.845Z] Total : 15803.03 61.73 0.00 0.00 8095.23 3131.16 15631.55 00:09:23.992 { 00:09:23.992 "results": [ 00:09:23.992 { 00:09:23.992 "job": "Nvme0n1", 00:09:23.992 "core_mask": "0x2", 00:09:23.992 "workload": "randwrite", 00:09:23.992 "status": "finished", 00:09:23.992 "queue_depth": 128, 00:09:23.992 "io_size": 4096, 00:09:23.992 "runtime": 10.005741, 00:09:23.992 "iops": 15803.027481922629, 00:09:23.992 "mibps": 61.73057610126027, 00:09:23.992 "io_failed": 0, 00:09:23.992 "io_timeout": 0, 00:09:23.992 "avg_latency_us": 8095.225622646699, 00:09:23.992 "min_latency_us": 3131.1644444444446, 00:09:23.992 "max_latency_us": 15631.54962962963 00:09:23.992 } 00:09:23.992 ], 00:09:23.992 "core_count": 1 00:09:23.992 } 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 134852 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 134852 ']' 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 134852 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134852 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:23.992 13:20:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134852' 00:09:23.992 killing process with pid 134852 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 134852 00:09:23.992 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.992 00:09:23.992 Latency(us) 00:09:23.992 [2024-10-14T11:20:15.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.992 [2024-10-14T11:20:15.845Z] =================================================================================================================== 00:09:23.992 [2024-10-14T11:20:15.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 134852 00:09:23.992 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.250 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:24.508 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:24.508 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 132227 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 132227 00:09:24.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 132227 Killed "${NVMF_APP[@]}" "$@" 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=136324 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 136324 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 136324 ']' 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.768 13:20:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.768 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.768 [2024-10-14 13:20:16.582199] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:09:24.768 [2024-10-14 13:20:16.582287] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.027 [2024-10-14 13:20:16.648753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.027 [2024-10-14 13:20:16.696991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.027 [2024-10-14 13:20:16.697045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.027 [2024-10-14 13:20:16.697073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.027 [2024-10-14 13:20:16.697084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.027 [2024-10-14 13:20:16.697093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:25.027 [2024-10-14 13:20:16.697722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.027 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.027 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:25.027 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:25.027 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.027 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:25.027 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.027 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.286 [2024-10-14 13:20:17.079639] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:25.286 [2024-10-14 13:20:17.079776] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:25.286 [2024-10-14 13:20:17.079822] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1813c4be-074e-489a-957a-8fd63468709f 00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1813c4be-074e-489a-957a-8fd63468709f 
00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.286 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.544 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1813c4be-074e-489a-957a-8fd63468709f -t 2000 00:09:25.803 [ 00:09:25.803 { 00:09:25.803 "name": "1813c4be-074e-489a-957a-8fd63468709f", 00:09:25.803 "aliases": [ 00:09:25.803 "lvs/lvol" 00:09:25.803 ], 00:09:25.803 "product_name": "Logical Volume", 00:09:25.803 "block_size": 4096, 00:09:25.803 "num_blocks": 38912, 00:09:25.803 "uuid": "1813c4be-074e-489a-957a-8fd63468709f", 00:09:25.803 "assigned_rate_limits": { 00:09:25.803 "rw_ios_per_sec": 0, 00:09:25.803 "rw_mbytes_per_sec": 0, 00:09:25.803 "r_mbytes_per_sec": 0, 00:09:25.803 "w_mbytes_per_sec": 0 00:09:25.803 }, 00:09:25.803 "claimed": false, 00:09:25.803 "zoned": false, 00:09:25.803 "supported_io_types": { 00:09:25.803 "read": true, 00:09:25.803 "write": true, 00:09:25.803 "unmap": true, 00:09:25.803 "flush": false, 00:09:25.803 "reset": true, 00:09:25.803 "nvme_admin": false, 00:09:25.803 "nvme_io": false, 00:09:25.803 "nvme_io_md": false, 00:09:25.803 "write_zeroes": true, 00:09:25.803 "zcopy": false, 00:09:25.803 "get_zone_info": false, 00:09:25.803 "zone_management": false, 00:09:25.803 "zone_append": 
false, 00:09:25.803 "compare": false, 00:09:25.803 "compare_and_write": false, 00:09:25.803 "abort": false, 00:09:25.803 "seek_hole": true, 00:09:25.803 "seek_data": true, 00:09:25.803 "copy": false, 00:09:25.803 "nvme_iov_md": false 00:09:25.803 }, 00:09:25.803 "driver_specific": { 00:09:25.803 "lvol": { 00:09:25.803 "lvol_store_uuid": "716ad387-7e47-4d13-8d08-2fb34ae40857", 00:09:25.803 "base_bdev": "aio_bdev", 00:09:25.803 "thin_provision": false, 00:09:25.803 "num_allocated_clusters": 38, 00:09:25.803 "snapshot": false, 00:09:25.803 "clone": false, 00:09:25.803 "esnap_clone": false 00:09:25.803 } 00:09:25.803 } 00:09:25.803 } 00:09:25.803 ] 00:09:25.803 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:25.803 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:25.803 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:26.061 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:26.061 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:26.061 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:26.631 [2024-10-14 13:20:18.433521] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.631 13:20:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.631 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:26.889 request: 00:09:26.889 { 00:09:26.889 "uuid": "716ad387-7e47-4d13-8d08-2fb34ae40857", 00:09:26.889 "method": "bdev_lvol_get_lvstores", 00:09:26.889 "req_id": 1 00:09:26.889 } 00:09:26.889 Got JSON-RPC error response 00:09:26.889 response: 00:09:26.889 { 00:09:26.889 "code": -19, 00:09:26.889 "message": "No such device" 00:09:26.889 } 00:09:26.889 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:26.889 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:26.889 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:26.889 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:26.889 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.148 aio_bdev 00:09:27.148 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1813c4be-074e-489a-957a-8fd63468709f 00:09:27.148 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1813c4be-074e-489a-957a-8fd63468709f 00:09:27.148 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.148 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:27.148 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.148 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.148 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.716 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1813c4be-074e-489a-957a-8fd63468709f -t 2000 00:09:27.716 [ 00:09:27.716 { 00:09:27.716 "name": "1813c4be-074e-489a-957a-8fd63468709f", 00:09:27.716 "aliases": [ 00:09:27.716 "lvs/lvol" 00:09:27.716 ], 00:09:27.716 "product_name": "Logical Volume", 00:09:27.716 "block_size": 4096, 00:09:27.716 "num_blocks": 38912, 00:09:27.716 "uuid": "1813c4be-074e-489a-957a-8fd63468709f", 00:09:27.716 "assigned_rate_limits": { 00:09:27.716 "rw_ios_per_sec": 0, 00:09:27.716 "rw_mbytes_per_sec": 0, 00:09:27.716 "r_mbytes_per_sec": 0, 00:09:27.716 "w_mbytes_per_sec": 0 00:09:27.716 }, 00:09:27.716 "claimed": false, 00:09:27.716 "zoned": false, 00:09:27.716 "supported_io_types": { 00:09:27.716 "read": true, 00:09:27.716 "write": true, 00:09:27.716 "unmap": true, 00:09:27.716 "flush": false, 00:09:27.716 "reset": true, 00:09:27.716 "nvme_admin": false, 00:09:27.716 "nvme_io": false, 00:09:27.716 "nvme_io_md": false, 00:09:27.716 "write_zeroes": true, 00:09:27.716 "zcopy": false, 00:09:27.716 "get_zone_info": false, 00:09:27.716 "zone_management": false, 00:09:27.716 "zone_append": false, 00:09:27.716 "compare": false, 00:09:27.716 "compare_and_write": false, 
00:09:27.716 "abort": false, 00:09:27.716 "seek_hole": true, 00:09:27.716 "seek_data": true, 00:09:27.716 "copy": false, 00:09:27.716 "nvme_iov_md": false 00:09:27.716 }, 00:09:27.716 "driver_specific": { 00:09:27.716 "lvol": { 00:09:27.716 "lvol_store_uuid": "716ad387-7e47-4d13-8d08-2fb34ae40857", 00:09:27.716 "base_bdev": "aio_bdev", 00:09:27.716 "thin_provision": false, 00:09:27.716 "num_allocated_clusters": 38, 00:09:27.716 "snapshot": false, 00:09:27.716 "clone": false, 00:09:27.716 "esnap_clone": false 00:09:27.716 } 00:09:27.716 } 00:09:27.716 } 00:09:27.716 ] 00:09:27.716 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:27.716 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:27.716 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:27.975 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:27.975 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:27.975 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:28.233 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:28.233 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1813c4be-074e-489a-957a-8fd63468709f 00:09:28.800 13:20:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 716ad387-7e47-4d13-8d08-2fb34ae40857 00:09:28.800 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.367 00:09:29.367 real 0m19.359s 00:09:29.367 user 0m49.074s 00:09:29.367 sys 0m4.468s 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.367 ************************************ 00:09:29.367 END TEST lvs_grow_dirty 00:09:29.367 ************************************ 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:29.367 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:29.368 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:29.368 nvmf_trace.0 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.368 rmmod nvme_tcp 00:09:29.368 rmmod nvme_fabrics 00:09:29.368 rmmod nvme_keyring 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 136324 ']' 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 136324 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 136324 ']' 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 136324 
00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136324 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136324' 00:09:29.368 killing process with pid 136324 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 136324 00:09:29.368 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 136324 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.626 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.536 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:31.536 00:09:31.536 real 0m42.497s 00:09:31.536 user 1m12.221s 00:09:31.536 sys 0m8.385s 00:09:31.536 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.536 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.536 ************************************ 00:09:31.536 END TEST nvmf_lvs_grow 00:09:31.536 ************************************ 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.797 ************************************ 00:09:31.797 START TEST nvmf_bdev_io_wait 00:09:31.797 ************************************ 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:31.797 * Looking for test storage... 
00:09:31.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:31.797 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.797 --rc genhtml_branch_coverage=1 00:09:31.797 --rc genhtml_function_coverage=1 00:09:31.797 --rc genhtml_legend=1 00:09:31.797 --rc geninfo_all_blocks=1 00:09:31.797 --rc geninfo_unexecuted_blocks=1 00:09:31.797 00:09:31.797 ' 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:31.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.797 --rc genhtml_branch_coverage=1 00:09:31.797 --rc genhtml_function_coverage=1 00:09:31.797 --rc genhtml_legend=1 00:09:31.797 --rc geninfo_all_blocks=1 00:09:31.797 --rc geninfo_unexecuted_blocks=1 00:09:31.797 00:09:31.797 ' 00:09:31.797 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:31.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.798 --rc genhtml_branch_coverage=1 00:09:31.798 --rc genhtml_function_coverage=1 00:09:31.798 --rc genhtml_legend=1 00:09:31.798 --rc geninfo_all_blocks=1 00:09:31.798 --rc geninfo_unexecuted_blocks=1 00:09:31.798 00:09:31.798 ' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:31.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.798 --rc genhtml_branch_coverage=1 00:09:31.798 --rc genhtml_function_coverage=1 00:09:31.798 --rc genhtml_legend=1 00:09:31.798 --rc geninfo_all_blocks=1 00:09:31.798 --rc geninfo_unexecuted_blocks=1 00:09:31.798 00:09:31.798 ' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.798 13:20:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:31.798 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.336 13:20:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.336 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:34.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:34.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.337 13:20:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:34.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.337 
13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:34.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.337 13:20:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:09:34.337 00:09:34.337 --- 10.0.0.2 ping statistics --- 00:09:34.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.337 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:09:34.337 00:09:34.337 --- 10.0.0.1 ping statistics --- 00:09:34.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.337 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=138862 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
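The trace above (nvmf/common.sh's nvmf_tcp_init) moves one port of the NIC pair into a private network namespace so target and initiator traffic actually crosses the wire, then punches a firewall hole for port 4420 and ping-checks both directions. A sketch of the same sequence, collected into one place; it emits the privileged commands instead of executing them, so it can be reviewed or piped to `sudo sh` on a machine that has the cvl_0_* ports (interface names and addresses are taken from the log):

```shell
# Sketch of the nvmf_tcp_init sequence traced above. Prints the privileged
# commands rather than running them, so no root is needed to inspect it.
setup_nvmf_netns() {
    local target_if=cvl_0_0       # moved into the namespace, target side
    local initiator_if=cvl_0_1    # stays in the root namespace
    local ns=cvl_0_0_ns_spdk
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

setup_nvmf_netns
```

The SPDK_NVMF comment on the iptables rule is what later lets the cleanup path remove exactly these rules by filtering them out of a saved ruleset.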
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 138862 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 138862 ']' 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.337 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.337 [2024-10-14 13:20:26.019550] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:09:34.337 [2024-10-14 13:20:26.019635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.337 [2024-10-14 13:20:26.083617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.337 [2024-10-14 13:20:26.128692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.337 [2024-10-14 13:20:26.128749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:34.337 [2024-10-14 13:20:26.128776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.337 [2024-10-14 13:20:26.128787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.337 [2024-10-14 13:20:26.128796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.337 [2024-10-14 13:20:26.130333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.337 [2024-10-14 13:20:26.130383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.337 [2024-10-14 13:20:26.130438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.337 [2024-10-14 13:20:26.130440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 13:20:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.596 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 [2024-10-14 13:20:26.331890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 Malloc0 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.597 
13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 [2024-10-14 13:20:26.382608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=138888 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:34.597 
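The rpc_cmd calls traced above configure the target step by step: bdev options, late framework init (the target was started with --wait-for-rpc), TCP transport, a malloc bdev, a subsystem, a namespace, and finally a listener on 10.0.0.2:4420. The same sequence as a reviewable list of rpc.py invocations (arguments are taken from the log; invoking `rpc.py` directly, rather than through the test harness's rpc_cmd wrapper, is an assumption and requires an SPDK checkout with scripts/rpc.py on PATH and a running target):

```shell
# The RPC configuration sequence from target/bdev_io_wait.sh, collected
# into one function that emits the commands for review.
nvmf_target_rpcs() {
    cat <<'EOF'
rpc.py bdev_set_options -p 5 -c 1
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF
}

nvmf_target_rpcs
```

Ordering matters here: bdev_set_options must land before framework_start_init, and the subsystem must exist before a namespace or listener can be attached to it.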
13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=138890 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:34.597 { 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme$subsystem", 00:09:34.597 "trtype": "$TEST_TRANSPORT", 00:09:34.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "$NVMF_PORT", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.597 "hdgst": ${hdgst:-false}, 00:09:34.597 "ddgst": ${ddgst:-false} 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 } 00:09:34.597 EOF 00:09:34.597 )") 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=138892 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:34.597 { 00:09:34.597 "params": { 00:09:34.597 
"name": "Nvme$subsystem", 00:09:34.597 "trtype": "$TEST_TRANSPORT", 00:09:34.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "$NVMF_PORT", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.597 "hdgst": ${hdgst:-false}, 00:09:34.597 "ddgst": ${ddgst:-false} 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 } 00:09:34.597 EOF 00:09:34.597 )") 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=138895 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:34.597 { 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme$subsystem", 00:09:34.597 "trtype": "$TEST_TRANSPORT", 00:09:34.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "$NVMF_PORT", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.597 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:34.597 "hdgst": ${hdgst:-false}, 00:09:34.597 "ddgst": ${ddgst:-false} 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 } 00:09:34.597 EOF 00:09:34.597 )") 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:34.597 { 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme$subsystem", 00:09:34.597 "trtype": "$TEST_TRANSPORT", 00:09:34.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "$NVMF_PORT", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.597 "hdgst": ${hdgst:-false}, 00:09:34.597 "ddgst": ${ddgst:-false} 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 } 00:09:34.597 EOF 00:09:34.597 )") 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 138888 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme1", 00:09:34.597 "trtype": "tcp", 00:09:34.597 "traddr": "10.0.0.2", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "4420", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.597 "hdgst": false, 00:09:34.597 "ddgst": false 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 }' 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme1", 00:09:34.597 "trtype": "tcp", 00:09:34.597 "traddr": "10.0.0.2", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "4420", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.597 "hdgst": false, 00:09:34.597 "ddgst": false 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 }' 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:34.597 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme1", 00:09:34.597 "trtype": "tcp", 00:09:34.597 "traddr": "10.0.0.2", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "4420", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.597 "hdgst": false, 00:09:34.597 "ddgst": false 00:09:34.597 }, 00:09:34.598 "method": "bdev_nvme_attach_controller" 00:09:34.598 }' 00:09:34.598 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:34.598 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:34.598 "params": { 00:09:34.598 "name": "Nvme1", 00:09:34.598 "trtype": "tcp", 00:09:34.598 "traddr": "10.0.0.2", 00:09:34.598 "adrfam": "ipv4", 00:09:34.598 "trsvcid": "4420", 00:09:34.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.598 "hdgst": false, 00:09:34.598 "ddgst": false 00:09:34.598 }, 00:09:34.598 "method": "bdev_nvme_attach_controller" 00:09:34.598 }' 00:09:34.598 [2024-10-14 13:20:26.431601] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:09:34.598 [2024-10-14 13:20:26.431598] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:09:34.598 [2024-10-14 13:20:26.431601] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:09:34.598 [2024-10-14 13:20:26.431673] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.598 
[2024-10-14 13:20:26.431673] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:34.598 
[2024-10-14 13:20:26.431673] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.598 
[2024-10-14 13:20:26.434526] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:09:34.598 [2024-10-14 13:20:26.434608] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:34.856 [2024-10-14 13:20:26.609731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.856 [2024-10-14 13:20:26.652959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.115 [2024-10-14 13:20:26.713176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.115 [2024-10-14 13:20:26.755367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:35.115 [2024-10-14 13:20:26.812307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.115 [2024-10-14 13:20:26.856607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:35.115 [2024-10-14 13:20:26.888345] app.c: 919:spdk_app_start: *NOTICE*: Total cores 
available: 1 00:09:35.115 [2024-10-14 13:20:26.927251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:35.374 Running I/O for 1 seconds... 00:09:35.374 Running I/O for 1 seconds... 00:09:35.374 Running I/O for 1 seconds... 00:09:35.374 Running I/O for 1 seconds... 00:09:36.314 200104.00 IOPS, 781.66 MiB/s 00:09:36.314 Latency(us) 00:09:36.314 [2024-10-14T11:20:28.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.314 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:36.314 Nvme1n1 : 1.00 199734.50 780.21 0.00 0.00 637.56 283.69 1844.72 00:09:36.314 [2024-10-14T11:20:28.167Z] =================================================================================================================== 00:09:36.314 [2024-10-14T11:20:28.167Z] Total : 199734.50 780.21 0.00 0.00 637.56 283.69 1844.72 00:09:36.314 6716.00 IOPS, 26.23 MiB/s 00:09:36.314 Latency(us) 00:09:36.314 [2024-10-14T11:20:28.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.314 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:36.314 Nvme1n1 : 1.02 6691.40 26.14 0.00 0.00 18930.50 8204.14 27573.67 00:09:36.314 [2024-10-14T11:20:28.167Z] =================================================================================================================== 00:09:36.314 [2024-10-14T11:20:28.167Z] Total : 6691.40 26.14 0.00 0.00 18930.50 8204.14 27573.67 00:09:36.314 7561.00 IOPS, 29.54 MiB/s [2024-10-14T11:20:28.167Z] 6294.00 IOPS, 24.59 MiB/s 00:09:36.314 Latency(us) 00:09:36.314 [2024-10-14T11:20:28.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.314 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:36.314 Nvme1n1 : 1.01 6374.20 24.90 0.00 0.00 19998.92 6553.60 41554.68 00:09:36.314 [2024-10-14T11:20:28.167Z] 
=================================================================================================================== 00:09:36.314 [2024-10-14T11:20:28.167Z] Total : 6374.20 24.90 0.00 0.00 19998.92 6553.60 41554.68 00:09:36.314 00:09:36.314 Latency(us) 00:09:36.314 [2024-10-14T11:20:28.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.314 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:36.314 Nvme1n1 : 1.01 7625.23 29.79 0.00 0.00 16701.79 7621.59 29321.29 00:09:36.314 [2024-10-14T11:20:28.167Z] =================================================================================================================== 00:09:36.314 [2024-10-14T11:20:28.167Z] Total : 7625.23 29.79 0.00 0.00 16701.79 7621.59 29321.29 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 138890 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 138892 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 138895 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:36.573 13:20:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.573 rmmod nvme_tcp 00:09:36.573 rmmod nvme_fabrics 00:09:36.573 rmmod nvme_keyring 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 138862 ']' 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 138862 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 138862 ']' 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 138862 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.573 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 138862 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 138862' 00:09:36.832 killing process with pid 138862 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 138862 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 138862 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.832 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.377 00:09:39.377 
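The iptr cleanup step above restores the firewall by re-loading the saved ruleset minus every rule tagged with the SPDK_NVMF comment planted during setup. The filter itself is a plain `grep -v`, so it can be exercised on a saved ruleset without touching iptables at all (the helper name below is illustrative):

```shell
# Drops every iptables-save line carrying the SPDK_NVMF comment tag that
# setup added; everything else passes through untouched.
strip_spdk_rules() {
    grep -v SPDK_NVMF
}

# Real use (root): iptables-save | strip_spdk_rules | iptables-restore
```

Tagging the rules at insert time is what makes this cleanup safe: it never has to guess which INPUT rules belong to the test run.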
real 0m7.248s 00:09:39.377 user 0m15.479s 00:09:39.377 sys 0m3.564s 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.377 ************************************ 00:09:39.377 END TEST nvmf_bdev_io_wait 00:09:39.377 ************************************ 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.377 ************************************ 00:09:39.377 START TEST nvmf_queue_depth 00:09:39.377 ************************************ 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:39.377 * Looking for test storage... 
00:09:39.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:39.377 
13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:39.377 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:39.377 --rc genhtml_branch_coverage=1 00:09:39.377 --rc genhtml_function_coverage=1 00:09:39.377 --rc genhtml_legend=1 00:09:39.377 --rc geninfo_all_blocks=1 00:09:39.377 --rc geninfo_unexecuted_blocks=1 00:09:39.377 00:09:39.377 ' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:39.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.377 --rc genhtml_branch_coverage=1 00:09:39.377 --rc genhtml_function_coverage=1 00:09:39.377 --rc genhtml_legend=1 00:09:39.377 --rc geninfo_all_blocks=1 00:09:39.377 --rc geninfo_unexecuted_blocks=1 00:09:39.377 00:09:39.377 ' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:39.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.377 --rc genhtml_branch_coverage=1 00:09:39.377 --rc genhtml_function_coverage=1 00:09:39.377 --rc genhtml_legend=1 00:09:39.377 --rc geninfo_all_blocks=1 00:09:39.377 --rc geninfo_unexecuted_blocks=1 00:09:39.377 00:09:39.377 ' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:39.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.377 --rc genhtml_branch_coverage=1 00:09:39.377 --rc genhtml_function_coverage=1 00:09:39.377 --rc genhtml_legend=1 00:09:39.377 --rc geninfo_all_blocks=1 00:09:39.377 --rc geninfo_unexecuted_blocks=1 00:09:39.377 00:09:39.377 ' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
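The trace above steps through `scripts/common.sh`'s `cmp_versions` helper: it splits each dotted version on `IFS=.-:` into an array and compares component-wise to decide whether the installed lcov (1.15) is older than 2, which selects the right `--rc` option spelling. A minimal standalone sketch of the same idea (the function name `lt_version` is illustrative, not the harness's own; numeric components only):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component-wise, as the
# cmp_versions trace above does: split on '.', '-' and ':',
# then compare numerically left to right.
# Returns 0 (true) when $1 is strictly less than $2.
lt_version() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=${#v1[@]}
    if (( ${#v2[@]} > len )); then len=${#v2[@]}; fi
    for (( i = 0; i < len; i++ )); do
        # Missing components count as 0, so 1.15 vs 2 compares as 1.15 vs 2.0
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

if lt_version 1.15 2; then
    echo "lcov 1.15 predates 2.x"
fi
```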
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.377 13:20:30 
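In the trace above, `nvmf/common.sh` derives `NVME_HOSTNQN` from `nvme gen-hostnqn` and then peels the bare UUID back out as `NVME_HOSTID`. When the `nvme` CLI is not at hand, an NQN of the same `nqn.2014-08.org.nvmexpress:uuid:<uuid>` shape can be built from the kernel's random UUID source; this is a stand-in sketch, not what the harness itself runs:

```shell
#!/usr/bin/env bash
# Build a host NQN shaped like `nvme gen-hostnqn` output
# (nqn.2014-08.org.nvmexpress:uuid:<uuid>), using the Linux
# kernel's random UUID file as a stand-in for the nvme CLI.
uuid=$(cat /proc/sys/kernel/random/uuid)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
hostid="$uuid"    # the harness keeps the bare UUID as NVME_HOSTID
echo "$hostnqn"
```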
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.377 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.378 13:20:30 
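The `paths/export.sh` trace above prepends the golangci/protoc/go directories unconditionally on every `source`, which is why the same three directories appear seven times over in the exported PATH. A dedup-on-prepend helper keeps repeated sourcing idempotent (the function name `path_prepend` is illustrative and not part of the harness):

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present,
# so re-sourcing an environment script does not snowball duplicates.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present: no-op
        *) PATH="$1:$PATH" ;;
    esac
}

OLD_PATH=$PATH                    # save so the demo doesn't clobber the shell
PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
DEMO_PATH=$PATH
PATH=$OLD_PATH
echo "$DEMO_PATH"
```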
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.378 13:20:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.378 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.283 13:20:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.283 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:41.284 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:41.284 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:41.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:41.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.284 
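The discovery loop traced above maps each supported PCI NIC to its netdev names by globbing `/sys/bus/pci/devices/<addr>/net/*` and stripping the path prefix (yielding `cvl_0_0`, `cvl_0_1`). The same PCI-to-interface relationship can be read from the other direction via `/sys/class/net`, which works on any Linux box rather than only on hosts with the harness's e810 ports; a sketch:

```shell
#!/usr/bin/env bash
# Walk /sys/class/net and report each interface's backing PCI
# address. Physical NICs carry a 'device' symlink back to their
# PCI node (the inverse of the /sys/bus/pci/devices/<pci>/net/*
# glob in the trace above); virtual devices like lo do not.
summary=""
for dev in /sys/class/net/*; do
    name=${dev##*/}               # same prefix-strip as "${pci_net_devs[@]##*/}"
    if [ -e "$dev/device" ]; then
        pci=$(basename "$(readlink -f "$dev/device")")
        summary+="$name -> $pci"$'\n'
    else
        summary+="$name -> (virtual)"$'\n'
    fi
done
printf '%s' "$summary"
```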
13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.284 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:09:41.554 00:09:41.554 --- 10.0.0.2 ping statistics --- 00:09:41.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.554 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:09:41.554 00:09:41.554 --- 10.0.0.1 ping statistics --- 00:09:41.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.554 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=141122 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec 
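The `nvmf_tcp_init` sequence above moves one NIC port into a fresh namespace (`cvl_0_0_ns_spdk`), addresses the host side as 10.0.0.1 and the namespaced target side as 10.0.0.2, and then pings in both directions before starting the target. The shape of that setup can be reproduced without the physical `cvl_0_*` ports by using a veth pair; all names below (`demo_ns`, `veth_*`) are illustrative, and the script degrades to a no-op where it lacks root or namespace support:

```shell
#!/usr/bin/env bash
# Sketch of the harness's target/initiator split: one endpoint in a
# network namespace, one in the host, verified with bidirectional
# ping. A veth pair stands in for the physical NIC ports.
NS=demo_ns

setup_and_ping() {
    ip link add veth_init type veth peer name veth_tgt &&
    ip link set veth_tgt netns "$NS" &&
    ip addr add 10.0.0.1/24 dev veth_init &&
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt &&
    ip link set veth_init up &&
    ip netns exec "$NS" ip link set veth_tgt up &&
    ip netns exec "$NS" ip link set lo up &&
    ping -c 1 -W 2 10.0.0.2 >/dev/null &&
    ip netns exec "$NS" ping -c 1 -W 2 10.0.0.1 >/dev/null
}

if [ "$(id -u)" -ne 0 ] || ! ip netns add "$NS" 2>/dev/null; then
    NS_RESULT=skipped          # needs root plus netns support
elif setup_and_ping; then
    NS_RESULT=ok
    ip netns del "$NS"
else
    NS_RESULT=skipped          # environment lacks veth/NET_ADMIN
    ip netns del "$NS"
fi
echo "netns demo: $NS_RESULT"
```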
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 141122 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 141122 ']' 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.554 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.554 [2024-10-14 13:20:33.267835] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:09:41.555 [2024-10-14 13:20:33.267944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.555 [2024-10-14 13:20:33.338653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.555 [2024-10-14 13:20:33.383259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.555 [2024-10-14 13:20:33.383324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:41.555 [2024-10-14 13:20:33.383353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.555 [2024-10-14 13:20:33.383364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.555 [2024-10-14 13:20:33.383373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.555 [2024-10-14 13:20:33.383992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 [2024-10-14 13:20:33.532314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 Malloc0 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 [2024-10-14 13:20:33.578784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.831 13:20:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=141271 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 141271 /var/tmp/bdevperf.sock 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 141271 ']' 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:41.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.831 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.831 [2024-10-14 13:20:33.625702] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:09:41.831 [2024-10-14 13:20:33.625763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141271 ] 00:09:42.123 [2024-10-14 13:20:33.684816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.123 [2024-10-14 13:20:33.730383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.123 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.123 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:42.123 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:42.123 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.123 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.405 NVMe0n1 00:09:42.405 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.405 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:42.405 Running I/O for 10 seconds... 
00:09:44.445 8192.00 IOPS, 32.00 MiB/s [2024-10-14T11:20:37.328Z] 8345.50 IOPS, 32.60 MiB/s [2024-10-14T11:20:38.345Z] 8519.00 IOPS, 33.28 MiB/s [2024-10-14T11:20:39.348Z] 8444.50 IOPS, 32.99 MiB/s [2024-10-14T11:20:40.341Z] 8536.80 IOPS, 33.35 MiB/s [2024-10-14T11:20:41.340Z] 8526.33 IOPS, 33.31 MiB/s [2024-10-14T11:20:42.329Z] 8579.00 IOPS, 33.51 MiB/s [2024-10-14T11:20:43.326Z] 8564.38 IOPS, 33.45 MiB/s [2024-10-14T11:20:44.269Z] 8611.78 IOPS, 33.64 MiB/s [2024-10-14T11:20:44.528Z] 8597.00 IOPS, 33.58 MiB/s 00:09:52.675 Latency(us) 00:09:52.675 [2024-10-14T11:20:44.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.675 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:52.675 Verification LBA range: start 0x0 length 0x4000 00:09:52.675 NVMe0n1 : 10.06 8642.89 33.76 0.00 0.00 118004.47 9077.95 74565.40 00:09:52.675 [2024-10-14T11:20:44.528Z] =================================================================================================================== 00:09:52.675 [2024-10-14T11:20:44.528Z] Total : 8642.89 33.76 0.00 0.00 118004.47 9077.95 74565.40 00:09:52.675 { 00:09:52.675 "results": [ 00:09:52.675 { 00:09:52.675 "job": "NVMe0n1", 00:09:52.675 "core_mask": "0x1", 00:09:52.675 "workload": "verify", 00:09:52.675 "status": "finished", 00:09:52.675 "verify_range": { 00:09:52.675 "start": 0, 00:09:52.675 "length": 16384 00:09:52.675 }, 00:09:52.675 "queue_depth": 1024, 00:09:52.675 "io_size": 4096, 00:09:52.675 "runtime": 10.061454, 00:09:52.675 "iops": 8642.886008324444, 00:09:52.675 "mibps": 33.76127347001736, 00:09:52.675 "io_failed": 0, 00:09:52.675 "io_timeout": 0, 00:09:52.675 "avg_latency_us": 118004.47435646871, 00:09:52.675 "min_latency_us": 9077.94962962963, 00:09:52.675 "max_latency_us": 74565.40444444444 00:09:52.675 } 00:09:52.675 ], 00:09:52.675 "core_count": 1 00:09:52.675 } 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
141271 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 141271 ']' 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 141271 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141271 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141271' 00:09:52.675 killing process with pid 141271 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 141271 00:09:52.675 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.675 00:09:52.675 Latency(us) 00:09:52.675 [2024-10-14T11:20:44.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.675 [2024-10-14T11:20:44.528Z] =================================================================================================================== 00:09:52.675 [2024-10-14T11:20:44.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.675 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 141271 00:09:52.933 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.933 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:52.933 
13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:52.933 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:52.933 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.934 rmmod nvme_tcp 00:09:52.934 rmmod nvme_fabrics 00:09:52.934 rmmod nvme_keyring 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 141122 ']' 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 141122 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 141122 ']' 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 141122 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141122 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141122' 00:09:52.934 killing process with pid 141122 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 141122 00:09:52.934 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 141122 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.195 13:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.105 13:20:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.105 00:09:55.105 real 0m16.175s 00:09:55.105 user 0m22.596s 00:09:55.105 sys 0m3.145s 00:09:55.106 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.106 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.106 ************************************ 00:09:55.106 END TEST nvmf_queue_depth 00:09:55.106 ************************************ 00:09:55.106 13:20:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.106 13:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:55.106 13:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.106 13:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.106 ************************************ 00:09:55.106 START TEST nvmf_target_multipath 00:09:55.106 ************************************ 00:09:55.106 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.366 * Looking for test storage... 
00:09:55.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:55.366 13:20:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.366 --rc genhtml_branch_coverage=1 00:09:55.366 --rc genhtml_function_coverage=1 00:09:55.366 --rc genhtml_legend=1 00:09:55.366 --rc geninfo_all_blocks=1 00:09:55.366 --rc geninfo_unexecuted_blocks=1 00:09:55.366 00:09:55.366 ' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.366 --rc genhtml_branch_coverage=1 00:09:55.366 --rc genhtml_function_coverage=1 00:09:55.366 --rc genhtml_legend=1 00:09:55.366 --rc geninfo_all_blocks=1 00:09:55.366 --rc geninfo_unexecuted_blocks=1 00:09:55.366 00:09:55.366 ' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.366 --rc genhtml_branch_coverage=1 00:09:55.366 --rc genhtml_function_coverage=1 00:09:55.366 --rc genhtml_legend=1 00:09:55.366 --rc geninfo_all_blocks=1 00:09:55.366 --rc geninfo_unexecuted_blocks=1 00:09:55.366 00:09:55.366 ' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.366 --rc genhtml_branch_coverage=1 00:09:55.366 --rc genhtml_function_coverage=1 00:09:55.366 --rc genhtml_legend=1 00:09:55.366 --rc geninfo_all_blocks=1 00:09:55.366 --rc geninfo_unexecuted_blocks=1 00:09:55.366 00:09:55.366 ' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.366 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.367 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:57.904 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:57.904 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:57.904 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:57.905 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:57.905 13:20:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:57.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:09:57.905 00:09:57.905 --- 10.0.0.2 ping statistics --- 00:09:57.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.905 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
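The `nvmf_tcp_init` sequence traced above (create a namespace, move one port into it, address both ends, bring links up, ping across) can be sketched as a standalone script. Everything below is a hypothetical reconstruction, not the actual nvmf/common.sh code: it substitutes a veth pair for the physical cvl_0_0/cvl_0_1 ports, uses a placeholder namespace name, and skips cleanly when run without root or iproute2.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the namespace setup seen in the log above;
# a veth pair stands in for the physical cvl_0_0/cvl_0_1 ports.
NS=demo_ns_spdk

setup_and_ping() {
  ip netns add "$NS" &&
  ip link add veth_host type veth peer name veth_ns &&
  ip link set veth_ns netns "$NS" &&
  ip addr add 10.0.0.1/24 dev veth_host &&
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_ns &&
  ip link set veth_host up &&
  ip netns exec "$NS" ip link set veth_ns up &&
  ip netns exec "$NS" ip link set lo up &&
  ping -c 1 10.0.0.2 > /dev/null
}

if [ "$(id -u)" -eq 0 ] && command -v ip > /dev/null && setup_and_ping; then
  demo_result="ping ok"
else
  demo_result="skipped (needs root, iproute2, NET_ADMIN)"
fi
# Always try to clean up the namespace, whether or not setup succeeded.
ip netns del "$NS" 2> /dev/null
echo "netns demo: $demo_result"
```

Note the cleanup ordering: the log's teardown path (`nvmf_tcp_fini`) likewise deletes the namespace and flushes addresses regardless of how far setup got.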
00:09:57.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:09:57.905 00:09:57.905 --- 10.0.0.1 ping statistics --- 00:09:57.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.905 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:57.905 only one NIC for nvmf test 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:57.905 13:20:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.905 rmmod nvme_tcp 00:09:57.905 rmmod nvme_fabrics 00:09:57.905 rmmod nvme_keyring 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.905 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.818 00:09:59.818 real 0m4.688s 00:09:59.818 user 0m0.975s 00:09:59.818 sys 0m1.671s 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.818 ************************************ 00:09:59.818 END TEST nvmf_target_multipath 00:09:59.818 ************************************ 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.818 13:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.078 ************************************ 00:10:00.078 START TEST nvmf_zcopy 00:10:00.078 ************************************ 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:00.078 * Looking for test storage... 00:10:00.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.078 13:20:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:00.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.078 --rc genhtml_branch_coverage=1 00:10:00.078 --rc genhtml_function_coverage=1 00:10:00.078 --rc genhtml_legend=1 00:10:00.078 --rc geninfo_all_blocks=1 00:10:00.078 --rc geninfo_unexecuted_blocks=1 00:10:00.078 00:10:00.078 ' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:00.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.078 --rc genhtml_branch_coverage=1 00:10:00.078 --rc genhtml_function_coverage=1 00:10:00.078 --rc genhtml_legend=1 00:10:00.078 --rc geninfo_all_blocks=1 00:10:00.078 --rc geninfo_unexecuted_blocks=1 00:10:00.078 00:10:00.078 ' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:00.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.078 --rc genhtml_branch_coverage=1 00:10:00.078 --rc genhtml_function_coverage=1 00:10:00.078 --rc genhtml_legend=1 00:10:00.078 --rc geninfo_all_blocks=1 00:10:00.078 --rc geninfo_unexecuted_blocks=1 00:10:00.078 00:10:00.078 ' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:00.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.078 --rc genhtml_branch_coverage=1 00:10:00.078 --rc 
genhtml_function_coverage=1 00:10:00.078 --rc genhtml_legend=1 00:10:00.078 --rc geninfo_all_blocks=1 00:10:00.078 --rc geninfo_unexecuted_blocks=1 00:10:00.078 00:10:00.078 ' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.078 13:20:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.078 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:00.079 13:20:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.079 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.614 13:20:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.614 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:02.615 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:02.615 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:02.615 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:02.615 13:20:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:02.615 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.615 13:20:54 
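The trace above shows `gather_supported_nvmf_pci_devs` bucketing PCI vendor:device pairs into the `e810`, `x722`, and `mlx` arrays before picking net devices. A minimal dry sketch of that classification, using only the IDs visible in the trace (the helper name `classify_nic` is hypothetical, not part of nvmf/common.sh):

```shell
# Hypothetical helper mirroring the ID buckets seen in the trace:
# intel=0x8086, mellanox=0x15b3; E810 is 0x1592/0x159b, X722 is 0x37d2.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # the two ports found above land in e810
```

Both 0000:0a:00.0 and 0000:0a:00.1 report 0x8086:0x159b, which is why the log prints two "Found ..." lines and `pci_devs` ends up with 2 entries.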
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:10:02.615 00:10:02.615 --- 10.0.0.2 ping statistics --- 00:10:02.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.615 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
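The `nvmf_tcp_init` steps traced above move the target port into a private namespace and address both ends before the ping checks. A dry-run sketch of that plumbing, with the exact interface/namespace names from the log (`run` only echoes, so no root or hardware is needed; actually executing these commands requires both):

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                                  # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
```

After this, `ping -c 1 10.0.0.2` from the host and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` verify both directions, which is what the two ping blocks in the log are doing.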
00:10:02.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:10:02.615 00:10:02.615 --- 10.0.0.1 ping statistics --- 00:10:02.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.615 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=146535 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 146535 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 146535 ']' 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.615 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.615 [2024-10-14 13:20:54.236846] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:10:02.615 [2024-10-14 13:20:54.236932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.615 [2024-10-14 13:20:54.300970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.615 [2024-10-14 13:20:54.344492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.615 [2024-10-14 13:20:54.344541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:02.616 [2024-10-14 13:20:54.344570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.616 [2024-10-14 13:20:54.344581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.616 [2024-10-14 13:20:54.344591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.616 [2024-10-14 13:20:54.345141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.616 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.616 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:02.616 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:02.616 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.616 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.875 [2024-10-14 13:20:54.487527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.875 [2024-10-14 13:20:54.503754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.875 malloc0 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:02.875 { 00:10:02.875 "params": { 00:10:02.875 "name": "Nvme$subsystem", 00:10:02.875 "trtype": "$TEST_TRANSPORT", 00:10:02.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.875 "adrfam": "ipv4", 00:10:02.875 "trsvcid": "$NVMF_PORT", 00:10:02.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.875 "hdgst": ${hdgst:-false}, 00:10:02.875 "ddgst": ${ddgst:-false} 00:10:02.875 }, 00:10:02.875 "method": "bdev_nvme_attach_controller" 00:10:02.875 } 00:10:02.875 EOF 00:10:02.875 )") 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:02.875 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:02.875 "params": { 00:10:02.875 "name": "Nvme1", 00:10:02.875 "trtype": "tcp", 00:10:02.875 "traddr": "10.0.0.2", 00:10:02.875 "adrfam": "ipv4", 00:10:02.875 "trsvcid": "4420", 00:10:02.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.875 "hdgst": false, 00:10:02.875 "ddgst": false 00:10:02.875 }, 00:10:02.875 "method": "bdev_nvme_attach_controller" 00:10:02.875 }' 00:10:02.875 [2024-10-14 13:20:54.590453] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:10:02.875 [2024-10-14 13:20:54.590533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146559 ] 00:10:02.875 [2024-10-14 13:20:54.655492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.875 [2024-10-14 13:20:54.703015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.441 Running I/O for 10 seconds... 
00:10:05.310 5738.00 IOPS, 44.83 MiB/s [2024-10-14T11:20:58.096Z] 5757.50 IOPS, 44.98 MiB/s [2024-10-14T11:20:59.030Z] 5782.33 IOPS, 45.17 MiB/s [2024-10-14T11:21:00.404Z] 5785.75 IOPS, 45.20 MiB/s [2024-10-14T11:21:01.337Z] 5800.80 IOPS, 45.32 MiB/s [2024-10-14T11:21:02.269Z] 5801.17 IOPS, 45.32 MiB/s [2024-10-14T11:21:03.203Z] 5800.71 IOPS, 45.32 MiB/s [2024-10-14T11:21:04.138Z] 5801.62 IOPS, 45.33 MiB/s [2024-10-14T11:21:05.072Z] 5801.00 IOPS, 45.32 MiB/s [2024-10-14T11:21:05.072Z] 5801.30 IOPS, 45.32 MiB/s 00:10:13.219 Latency(us) 00:10:13.219 [2024-10-14T11:21:05.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.219 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:13.219 Verification LBA range: start 0x0 length 0x1000 00:10:13.219 Nvme1n1 : 10.02 5804.94 45.35 0.00 0.00 21991.28 3568.07 30680.56 00:10:13.219 [2024-10-14T11:21:05.072Z] =================================================================================================================== 00:10:13.219 [2024-10-14T11:21:05.072Z] Total : 5804.94 45.35 0.00 0.00 21991.28 3568.07 30680.56 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=147945 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:13.478 13:21:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:13.478 { 00:10:13.478 "params": { 00:10:13.478 "name": "Nvme$subsystem", 00:10:13.478 "trtype": "$TEST_TRANSPORT", 00:10:13.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.478 "adrfam": "ipv4", 00:10:13.478 "trsvcid": "$NVMF_PORT", 00:10:13.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.478 "hdgst": ${hdgst:-false}, 00:10:13.478 "ddgst": ${ddgst:-false} 00:10:13.478 }, 00:10:13.478 "method": "bdev_nvme_attach_controller" 00:10:13.478 } 00:10:13.478 EOF 00:10:13.478 )") 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:13.478 [2024-10-14 13:21:05.242429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.242468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:13.478 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:13.478 "params": { 00:10:13.478 "name": "Nvme1", 00:10:13.478 "trtype": "tcp", 00:10:13.478 "traddr": "10.0.0.2", 00:10:13.478 "adrfam": "ipv4", 00:10:13.478 "trsvcid": "4420", 00:10:13.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.478 "hdgst": false, 00:10:13.478 "ddgst": false 00:10:13.478 }, 00:10:13.478 "method": "bdev_nvme_attach_controller" 00:10:13.478 }' 00:10:13.478 [2024-10-14 13:21:05.250383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.250408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.258404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.258440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.266443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.266463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.274458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.274478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.282492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.282512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.283764] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:10:13.478 [2024-10-14 13:21:05.283834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147945 ] 00:10:13.478 [2024-10-14 13:21:05.290495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.290515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.298533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.298564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.306549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.306570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.314568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.314588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.322591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.322611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.478 [2024-10-14 13:21:05.330619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.478 [2024-10-14 13:21:05.330640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.338636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.338657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:13.736 [2024-10-14 13:21:05.346660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.346681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.347983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.736 [2024-10-14 13:21:05.354703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.354730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.362736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.362773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.370722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.370743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.378740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.378760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.386763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.386784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.394788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.394809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.736 [2024-10-14 13:21:05.399285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.736 [2024-10-14 13:21:05.402807] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.736 [2024-10-14 13:21:05.402834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.996 Running I/O for 5 seconds... 
00:10:15.031 11686.00 IOPS, 91.30 MiB/s 
[2024-10-14T11:21:06.884Z] [2024-10-14 13:21:07.156002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:15.549 [2024-10-14 13:21:07.166641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.166667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.177398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.177440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.189853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.189879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.199499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.199528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.210954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.210982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.221968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.222009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.232926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.232952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.245714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.245741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.255763] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.255789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.267095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.267145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.280186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.280213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.290985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.291012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.302178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.302206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.315688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.315715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.326380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.326407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.336764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.336790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.347507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.347533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.360260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.360287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.370271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.370299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.381602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.381639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.392308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.392336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.549 [2024-10-14 13:21:07.403677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.549 [2024-10-14 13:21:07.403705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.414785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.414812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.426059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.426086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.436708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 
[2024-10-14 13:21:07.436734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.449615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.449643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.459969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.459997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.471352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.471380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.484418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.484459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.495143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.495182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.505905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.505931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.517374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.517411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.529194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.529222] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.539860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.539886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.552290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.552318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.562016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.562042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.573644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.573670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.584820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.584846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.596041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.596067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.608242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.608269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.617898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.617923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:15.808 [2024-10-14 13:21:07.628893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.628920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.641768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.641795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.808 [2024-10-14 13:21:07.652102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.808 [2024-10-14 13:21:07.652153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.663186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.663214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.676288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.676315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.686913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.686939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.698208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.698235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 11679.50 IOPS, 91.25 MiB/s [2024-10-14T11:21:07.920Z] [2024-10-14 13:21:07.711276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.711303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:16.067 [2024-10-14 13:21:07.721479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.721506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.732034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.732060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.742612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.742638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.753475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.753506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.764433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.764460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.775739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.775768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.786785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.786812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.799885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.799911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.810516] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.810565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.821306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.821333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.832124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.832159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.842511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.842537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.853246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.853273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.864256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.864283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.877174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.877201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.887366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.887408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.897729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.897755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.908798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.908825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.067 [2024-10-14 13:21:07.919608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.067 [2024-10-14 13:21:07.919636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:07.930325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:07.930353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:07.943090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:07.943116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:07.953375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:07.953402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:07.964270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:07.964298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:07.975214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:07.975242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:07.986208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 
[2024-10-14 13:21:07.986236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:07.999094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:07.999146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:08.009212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:08.009240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:08.020021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:08.020071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:08.032826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:08.032853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:08.043243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:08.043270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:08.053905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:08.053932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:08.064712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:08.064738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.326 [2024-10-14 13:21:08.075245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.326 [2024-10-14 13:21:08.075272] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.086162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.086198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.096771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.096797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.107618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.107643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.117846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.117872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.128516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.128542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.139002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.139029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.149719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.149745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.327 [2024-10-14 13:21:08.160597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.160623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:16.327 [2024-10-14 13:21:08.173198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.327 [2024-10-14 13:21:08.173224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 [2024-10-14 13:21:08.183799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.585 [2024-10-14 13:21:08.183826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 [2024-10-14 13:21:08.194644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.585 [2024-10-14 13:21:08.194671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 [2024-10-14 13:21:08.207427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.585 [2024-10-14 13:21:08.207454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 [2024-10-14 13:21:08.218056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.585 [2024-10-14 13:21:08.218083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 [2024-10-14 13:21:08.228528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.585 [2024-10-14 13:21:08.228562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 [2024-10-14 13:21:08.239281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.585 [2024-10-14 13:21:08.239308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.585 [2024-10-14 13:21:08.251767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.251793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.261503] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.261530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.272850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.272877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.285830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.285856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.296703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.296729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.307636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.307663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.320339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.320367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.330623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.330649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.341746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.341772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.352773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.352799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.363952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.363978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.376626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.376652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.386790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.386816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.397721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.397747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.410241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.410269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.419710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.419736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.586 [2024-10-14 13:21:08.430972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.586 [2024-10-14 13:21:08.430998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.442357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 
[2024-10-14 13:21:08.442395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.453392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.453433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.464561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.464587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.474873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.474898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.485562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.485589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.498600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.498627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.508983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.509009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.519822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.519848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.532642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.532667] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.543223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.543250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.554257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.554285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.566980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.567008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.578694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.578735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.588074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.588101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.599531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.599561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.612100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.612151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.844 [2024-10-14 13:21:08.622032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.844 [2024-10-14 13:21:08.622058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:16.845 [2024-10-14 13:21:08.632965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.845 [2024-10-14 13:21:08.632992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.845 [2024-10-14 13:21:08.644008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.845 [2024-10-14 13:21:08.644034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.845 [2024-10-14 13:21:08.655111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.845 [2024-10-14 13:21:08.655156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.845 [2024-10-14 13:21:08.668229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.845 [2024-10-14 13:21:08.668257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.845 [2024-10-14 13:21:08.678632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.845 [2024-10-14 13:21:08.678659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.845 [2024-10-14 13:21:08.689083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.845 [2024-10-14 13:21:08.689111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.700093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.700121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 11701.67 IOPS, 91.42 MiB/s [2024-10-14T11:21:08.956Z] [2024-10-14 13:21:08.712820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.712848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:17.103 [2024-10-14 13:21:08.723078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.723105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.733723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.733750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.744163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.744190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.754689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.754730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.765774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.765801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.776468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.776494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.786866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.786892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.797310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.797337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.807902] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.807929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.818712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.818739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.831373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.831400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.841680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.841707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.852468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.852519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.863120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.863157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.873985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.874012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.886996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.887023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.897563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.897589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.908625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.908651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.921284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.921312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.931704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.931731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.942661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.942687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.103 [2024-10-14 13:21:08.955787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.103 [2024-10-14 13:21:08.955815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:08.966356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:08.966383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:08.977541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:08.977569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:08.990077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 
[2024-10-14 13:21:08.990103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.000149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.000177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.010845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.010872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.023553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.023580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.033984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.034011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.044914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.044941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.057729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.057766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.068211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.068244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.079414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.079441] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.090656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.090683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.101725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.101752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.112520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.112547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.123722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.123749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.136992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.137019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.147693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.147719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.158395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.158438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.361 [2024-10-14 13:21:09.169301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.361 [2024-10-14 13:21:09.169328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:17.361 [2024-10-14 13:21:09.179959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.362 [2024-10-14 13:21:09.179985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.362 [2024-10-14 13:21:09.190664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.362 [2024-10-14 13:21:09.190709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.362 [2024-10-14 13:21:09.201176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.362 [2024-10-14 13:21:09.201203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.362 [2024-10-14 13:21:09.212157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.362 [2024-10-14 13:21:09.212185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.620 [2024-10-14 13:21:09.223039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.620 [2024-10-14 13:21:09.223066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.620 [2024-10-14 13:21:09.235810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.235837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.246422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.246449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.257250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.257277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.269901] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.269926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.280252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.280292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.291311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.291338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.302364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.302409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.313080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.313106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.325359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.325393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.334755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.334782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.346199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.346226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.358574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.358600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.368347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.368374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.379111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.379146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.389593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.389619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.400092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.400156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.410794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.410820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.423805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.423845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.434305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.434332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.444913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 
[2024-10-14 13:21:09.444939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.458465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.458503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.621 [2024-10-14 13:21:09.470333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.621 [2024-10-14 13:21:09.470375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.480019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.480046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.491957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.491991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.502981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.503008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.513641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.513667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.524505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.524531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.535486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.535512] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.548504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.548531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.559139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.559167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.569493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.569519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.580072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.580099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.592709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.592737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.602965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.602990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.613715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.613741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.624959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.624984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:17.879 [2024-10-14 13:21:09.635584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.635610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.648225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.648252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.659064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.659090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.670002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.670028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.682652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.682678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.692704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.692750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 [2024-10-14 13:21:09.704462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.704499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.879 11708.75 IOPS, 91.47 MiB/s [2024-10-14T11:21:09.732Z] [2024-10-14 13:21:09.715605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.715633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:17.879 [2024-10-14 13:21:09.726742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.879 [2024-10-14 13:21:09.726769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.737502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.737529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.748291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.748318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.759524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.759551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.772726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.772752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.783288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.783315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.794007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.794034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.806736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.806774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.817004] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.817031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.827719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.827746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.838786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.838813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.849723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.849750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.863096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.863145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.873656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.873683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.884306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.884333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.895360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.895386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138 [2024-10-14 13:21:09.906123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.138 [2024-10-14 13:21:09.906158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.138
[... the same subsystem.c:2128 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" error pair repeats with advancing timestamps from 13:21:09.919741 through 13:21:10.709260 ...]
11689.00 IOPS, 91.32 MiB/s [2024-10-14T11:21:10.768Z]
[2024-10-14 13:21:10.720780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.915 [2024-10-14 13:21:10.720805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.915
[2024-10-14 13:21:10.726559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.915 [2024-10-14 13:21:10.726583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.915
00:10:18.915 Latency(us)
00:10:18.915 [2024-10-14T11:21:10.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:18.915 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:18.915 Nvme1n1 : 5.01 11691.32 91.34 0.00 0.00 10933.96 4854.52 18447.17
00:10:18.915 [2024-10-14T11:21:10.768Z] ===================================================================================================================
00:10:18.915 [2024-10-14T11:21:10.768Z] Total : 11691.32 91.34 0.00 0.00 10933.96 4854.52 18447.17
00:10:18.915 [2024-10-14 13:21:10.734576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.915 [2024-10-14 13:21:10.734598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.915
[... the error pair repeats with advancing timestamps from 13:21:10.742602 through 13:21:10.903057 ...]
00:10:19.174 [2024-10-14 13:21:10.911055] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.174 [2024-10-14 13:21:10.911075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.174 [2024-10-14 13:21:10.919076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.174 [2024-10-14 13:21:10.919097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (147945) - No such process 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 147945 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.174 delay0 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.174 13:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:19.432 [2024-10-14 13:21:11.031882] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:25.990 Initializing NVMe Controllers 00:10:25.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:25.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:25.990 Initialization complete. Launching workers. 00:10:25.990 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 111 00:10:25.990 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 398, failed to submit 33 00:10:25.990 success 238, unsuccessful 160, failed 0 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.990 rmmod 
nvme_tcp 00:10:25.990 rmmod nvme_fabrics 00:10:25.990 rmmod nvme_keyring 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 146535 ']' 00:10:25.990 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 146535 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 146535 ']' 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 146535 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 146535 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 146535' 00:10:25.991 killing process with pid 146535 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 146535 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 146535 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp 
== \t\c\p ]] 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.991 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.909 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:27.909 00:10:27.909 real 0m27.826s 00:10:27.909 user 0m41.707s 00:10:27.909 sys 0m7.501s 00:10:27.909 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.910 ************************************ 00:10:27.910 END TEST nvmf_zcopy 00:10:27.910 ************************************ 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:27.910 13:21:19 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.910 ************************************ 00:10:27.910 START TEST nvmf_nmic 00:10:27.910 ************************************ 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:27.910 * Looking for test storage... 00:10:27.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.910
[... xtrace of the standard scripts/common.sh version comparison (cmp_versions 1.15 '<' 2 for lcov) and the LCOV_OPTS/LCOV coverage-flag exports elided ...]
13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.910 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:27.911 
13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.911 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.446 13:21:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:30.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:30.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:30.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:30.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:30.446 
13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.446 13:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.446 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.446 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.446 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.446 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:10:30.446 00:10:30.446 --- 10.0.0.2 ping statistics --- 00:10:30.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.447 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:30.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:10:30.447 00:10:30.447 --- 10.0.0.1 ping statistics --- 00:10:30.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.447 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=151775 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.447 
13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 151775 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 151775 ']' 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.447 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.447 [2024-10-14 13:21:22.125827] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:10:30.447 [2024-10-14 13:21:22.125907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.447 [2024-10-14 13:21:22.187943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.447 [2024-10-14 13:21:22.237445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.447 [2024-10-14 13:21:22.237492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.447 [2024-10-14 13:21:22.237507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.447 [2024-10-14 13:21:22.237519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
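The `nvmf_tcp_init` section earlier in the trace moves the target-side NIC (`cvl_0_0`) into a private network namespace, addresses both ends on 10.0.0.0/24, and opens TCP port 4420. A dry-run reconstruction of those steps — commands are only echoed here, since the real ones need root and the physical NICs:

```shell
# Dry run of the namespace plumbing visible in the trace: target NIC into
# a fresh netns, initiator NIC left in the default namespace, addresses,
# link-up, and the iptables ACCEPT rule for the NVMe/TCP port.
netns_setup_cmds() {
    local tgt=$1 ini=$2 ns=$3
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $tgt netns $ns" \
        "ip addr add 10.0.0.1/24 dev $ini" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt" \
        "ip link set $ini up" \
        "ip netns exec $ns ip link set $tgt up" \
        "ip netns exec $ns ip link set lo up" \
        "iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT"
}
netns_setup_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

This is why `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk`: the target only sees `cvl_0_0` (10.0.0.2), while the initiator connects from the default namespace via `cvl_0_1` (10.0.0.1) — the two `ping` checks above verify that path in both directions.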
00:10:30.447 [2024-10-14 13:21:22.237529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.447 [2024-10-14 13:21:22.239164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.447 [2024-10-14 13:21:22.239222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.447 [2024-10-14 13:21:22.239196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.447 [2024-10-14 13:21:22.239224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 [2024-10-14 13:21:22.389615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.706 13:21:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 Malloc0 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 [2024-10-14 13:21:22.460206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:30.706 test case1: single bdev can't be used in multiple subsystems 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 [2024-10-14 13:21:22.484007] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:30.706 [2024-10-14 13:21:22.484036] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:30.706 [2024-10-14 13:21:22.484066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:30.706 request: 00:10:30.706 { 00:10:30.706 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:30.706 "namespace": { 00:10:30.706 "bdev_name": "Malloc0", 00:10:30.706 "no_auto_visible": false 00:10:30.706 }, 00:10:30.706 "method": "nvmf_subsystem_add_ns", 00:10:30.706 "req_id": 1 00:10:30.706 } 00:10:30.706 Got JSON-RPC error response 00:10:30.706 response: 00:10:30.706 { 00:10:30.706 "code": -32602, 00:10:30.706 "message": "Invalid parameters" 00:10:30.706 } 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:30.706 Adding namespace failed - expected result. 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:30.706 test case2: host connect to nvmf target in multiple paths 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.706 [2024-10-14 13:21:22.492152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.706 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.640 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:32.205 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.205 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:32.205 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.205 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:32.205 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:34.102 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:34.102 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:34.102 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.102 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:34.102 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.102 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:34.102 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:34.102 [global] 00:10:34.102 thread=1 
00:10:34.102 invalidate=1 00:10:34.102 rw=write 00:10:34.102 time_based=1 00:10:34.102 runtime=1 00:10:34.102 ioengine=libaio 00:10:34.102 direct=1 00:10:34.102 bs=4096 00:10:34.102 iodepth=1 00:10:34.102 norandommap=0 00:10:34.102 numjobs=1 00:10:34.102 00:10:34.102 verify_dump=1 00:10:34.102 verify_backlog=512 00:10:34.102 verify_state_save=0 00:10:34.102 do_verify=1 00:10:34.102 verify=crc32c-intel 00:10:34.102 [job0] 00:10:34.102 filename=/dev/nvme0n1 00:10:34.102 Could not set queue depth (nvme0n1) 00:10:34.668 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.668 fio-3.35 00:10:34.668 Starting 1 thread 00:10:36.041 00:10:36.041 job0: (groupid=0, jobs=1): err= 0: pid=152418: Mon Oct 14 13:21:27 2024 00:10:36.041 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:36.041 slat (nsec): min=4969, max=63796, avg=14034.86, stdev=9040.28 00:10:36.041 clat (usec): min=161, max=516, avg=240.36, stdev=47.78 00:10:36.041 lat (usec): min=166, max=523, avg=254.39, stdev=52.69 00:10:36.041 clat percentiles (usec): 00:10:36.041 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 200], 00:10:36.041 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 245], 00:10:36.041 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 330], 00:10:36.041 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 404], 99.95th=[ 429], 00:10:36.041 | 99.99th=[ 519] 00:10:36.041 write: IOPS=2531, BW=9.89MiB/s (10.4MB/s)(9.90MiB/1001msec); 0 zone resets 00:10:36.041 slat (nsec): min=6585, max=69333, avg=13772.68, stdev=6461.20 00:10:36.041 clat (usec): min=121, max=375, avg=167.99, stdev=43.12 00:10:36.041 lat (usec): min=128, max=423, avg=181.77, stdev=45.80 00:10:36.041 clat percentiles (usec): 00:10:36.041 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:10:36.041 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:10:36.041 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 241], 
95.00th=[ 281], 00:10:36.041 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 351], 00:10:36.041 | 99.99th=[ 375] 00:10:36.041 bw ( KiB/s): min=10576, max=10576, per=100.00%, avg=10576.00, stdev= 0.00, samples=1 00:10:36.041 iops : min= 2644, max= 2644, avg=2644.00, stdev= 0.00, samples=1 00:10:36.041 lat (usec) : 250=78.20%, 500=21.78%, 750=0.02% 00:10:36.041 cpu : usr=4.00%, sys=6.80%, ctx=4582, majf=0, minf=1 00:10:36.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.041 issued rwts: total=2048,2534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.041 00:10:36.041 Run status group 0 (all jobs): 00:10:36.042 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:36.042 WRITE: bw=9.89MiB/s (10.4MB/s), 9.89MiB/s-9.89MiB/s (10.4MB/s-10.4MB/s), io=9.90MiB (10.4MB), run=1001-1001msec 00:10:36.042 00:10:36.042 Disk stats (read/write): 00:10:36.042 nvme0n1: ios=2098/2066, merge=0/0, ticks=493/324, in_queue=817, util=91.58% 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.042 13:21:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.042 rmmod nvme_tcp 00:10:36.042 rmmod nvme_fabrics 00:10:36.042 rmmod nvme_keyring 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 151775 ']' 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 151775 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 151775 ']' 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 151775 00:10:36.042 13:21:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 151775 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 151775' 00:10:36.042 killing process with pid 151775 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 151775 00:10:36.042 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 151775 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.306 13:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.210 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.210 00:10:38.210 real 0m10.393s 00:10:38.210 user 0m23.530s 00:10:38.210 sys 0m3.026s 00:10:38.210 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.210 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.210 ************************************ 00:10:38.210 END TEST nvmf_nmic 00:10:38.210 ************************************ 00:10:38.210 13:21:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:38.210 13:21:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.210 13:21:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.210 13:21:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.210 ************************************ 00:10:38.210 START TEST nvmf_fio_target 00:10:38.210 ************************************ 00:10:38.210 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:38.210 * Looking for test storage... 
00:10:38.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:38.469 13:21:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:38.469 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:38.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.470 
--rc genhtml_branch_coverage=1 00:10:38.470 --rc genhtml_function_coverage=1 00:10:38.470 --rc genhtml_legend=1 00:10:38.470 --rc geninfo_all_blocks=1 00:10:38.470 --rc geninfo_unexecuted_blocks=1 00:10:38.470 00:10:38.470 ' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:38.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.470 --rc genhtml_branch_coverage=1 00:10:38.470 --rc genhtml_function_coverage=1 00:10:38.470 --rc genhtml_legend=1 00:10:38.470 --rc geninfo_all_blocks=1 00:10:38.470 --rc geninfo_unexecuted_blocks=1 00:10:38.470 00:10:38.470 ' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:38.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.470 --rc genhtml_branch_coverage=1 00:10:38.470 --rc genhtml_function_coverage=1 00:10:38.470 --rc genhtml_legend=1 00:10:38.470 --rc geninfo_all_blocks=1 00:10:38.470 --rc geninfo_unexecuted_blocks=1 00:10:38.470 00:10:38.470 ' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:38.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.470 --rc genhtml_branch_coverage=1 00:10:38.470 --rc genhtml_function_coverage=1 00:10:38.470 --rc genhtml_legend=1 00:10:38.470 --rc geninfo_all_blocks=1 00:10:38.470 --rc geninfo_unexecuted_blocks=1 00:10:38.470 00:10:38.470 ' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.470 
13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.470 13:21:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.470 13:21:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.470 13:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.373 13:21:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:40.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:40.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.373 
13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.373 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:40.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.374 13:21:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:40.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.374 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:10:40.633 00:10:40.633 --- 10.0.0.2 ping statistics --- 00:10:40.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.633 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:10:40.633 00:10:40.633 --- 10.0.0.1 ping statistics --- 00:10:40.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.633 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=154501 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 154501 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 154501 ']' 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
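The `nvmf_tcp_init` steps traced above (common.sh@250-291) boil down to moving the target-side netdev into a private namespace, addressing both ends, opening TCP/4420, and ping-testing the path. A minimal sketch follows; interface names, IPs, and the iptables rule are taken from the log, while the `RUN_PRIVILEGED` guard and the `cidr_ip` helper are illustrative additions so the script stays inert without root:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from the trace above: the target
# interface goes into its own namespace, both ends get /24 addresses,
# TCP port 4420 is opened, and reachability is verified with ping.
TGT_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
TGT_CIDR=10.0.0.2/24 INIT_CIDR=10.0.0.1/24

cidr_ip() { echo "${1%/*}"; }              # 10.0.0.2/24 -> 10.0.0.2

if [[ ${RUN_PRIVILEGED:-0} == 1 ]]; then   # needs root and real netdevs
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add "$INIT_CIDR" dev "$INIT_IF"
    ip netns exec "$NS" ip addr add "$TGT_CIDR" dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # accept NVMe/TCP traffic arriving on the initiator-side interface
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 "$(cidr_ip "$TGT_CIDR")"
    ip netns exec "$NS" ping -c 1 "$(cidr_ip "$INIT_CIDR")"
fi
```

Because the target ends up namespaced, every later RPC and app launch in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.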
00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.633 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.633 [2024-10-14 13:21:32.371783] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:10:40.633 [2024-10-14 13:21:32.371858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.633 [2024-10-14 13:21:32.437390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.633 [2024-10-14 13:21:32.486536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.633 [2024-10-14 13:21:32.486603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.633 [2024-10-14 13:21:32.486618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.633 [2024-10-14 13:21:32.486653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.633 [2024-10-14 13:21:32.486663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
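The `nvmfappstart`/`waitforlisten` pattern logged above amounts to launching `nvmf_tgt` inside the namespace and polling until its RPC UNIX socket appears. A simplified sketch, with the binary path, core mask, and socket path taken from the log; the polling loop is a hypothetical stand-in for the real `waitforlisten` helper, which also retries and checks the pid:

```shell
#!/usr/bin/env bash
# Sketch: start the SPDK nvmf target inside the test namespace, then
# wait for /var/tmp/spdk.sock to show up before issuing RPCs.
NS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock
NVMF_TGT=./build/bin/nvmf_tgt            # assumed build location

wait_for_socket() {                      # poll until the UNIX socket exists
    local sock=$1 tries=${2:-100}
    while (( tries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}

if [[ ${RUN_PRIVILEGED:-0} == 1 ]]; then # needs root and SPDK binaries
    ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    wait_for_socket "$RPC_SOCK" || { echo "nvmf_tgt never listened" >&2; exit 1; }
    echo "nvmf_tgt up, pid $nvmfpid"
fi
```

Once the socket exists, `rpc.py` calls (as in the fio.sh lines that follow) are serviced by this process.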
00:10:40.892 [2024-10-14 13:21:32.490895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.892 [2024-10-14 13:21:32.490986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.892 [2024-10-14 13:21:32.491124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.892 [2024-10-14 13:21:32.491137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.892 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.892 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:40.892 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:40.892 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.892 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.892 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.892 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.150 [2024-10-14 13:21:32.890397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.150 13:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.408 13:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:41.408 13:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.666 13:21:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:41.666 13:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.232 13:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:42.232 13:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.490 13:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:42.490 13:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:42.747 13:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.005 13:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:43.005 13:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.263 13:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:43.263 13:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.521 13:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:43.521 13:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:43.779 13:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.037 13:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.037 13:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.294 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.294 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.552 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.810 [2024-10-14 13:21:36.608373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.810 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:45.067 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:45.325 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
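The fio.sh RPC sequence above can be summarized as: create malloc bdevs, group some into raid0/concat arrays, expose everything as namespaces of one subsystem, add a TCP listener, and connect with nvme-cli. The sketch below condenses those steps; the NQN, serial, listener address, and RPC verbs are from the log, while the `RUN_SPDK` guard and the name-accumulation loop are illustrative (the real script builds `malloc_bdevs` the same append-with-trailing-space way):

```shell
#!/usr/bin/env bash
# Sketch of the bdev/subsystem provisioning driven by target/fio.sh.
RPC=./scripts/rpc.py                     # assumed checkout-relative path
NQN=nqn.2016-06.io.spdk:cnode1

# Accumulate bdev names space-separated, as the trace does.
malloc_bdevs=""
for i in 0 1; do
    malloc_bdevs+="Malloc$i "
done

if [[ ${RUN_SPDK:-0} == 1 ]]; then       # needs a running nvmf_tgt + root
    for b in $malloc_bdevs; do
        "$RPC" bdev_malloc_create -b "$b" 64 512   # 64 MiB, 512 B blocks
    done
    "$RPC" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    for b in $malloc_bdevs raid0; do
        "$RPC" nvmf_subsystem_add_ns "$NQN" "$b"
    done
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    # host-side attach (the log additionally passes --hostnqn/--hostid)
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
fi
```

After the connect, the namespaces surface on the host as /dev/nvme0n1..n4, which is exactly what the fio job files below target.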
00:10:46.256 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:46.256 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.256 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.256 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:46.256 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:46.256 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.152 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.152 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.152 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.152 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:48.152 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.152 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:48.152 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:48.152 [global] 00:10:48.152 thread=1 00:10:48.152 invalidate=1 00:10:48.152 rw=write 00:10:48.152 time_based=1 00:10:48.152 runtime=1 00:10:48.152 ioengine=libaio 00:10:48.152 direct=1 00:10:48.152 bs=4096 00:10:48.152 iodepth=1 00:10:48.152 norandommap=0 00:10:48.152 numjobs=1 00:10:48.152 00:10:48.152 
verify_dump=1 00:10:48.152 verify_backlog=512 00:10:48.152 verify_state_save=0 00:10:48.152 do_verify=1 00:10:48.152 verify=crc32c-intel 00:10:48.152 [job0] 00:10:48.152 filename=/dev/nvme0n1 00:10:48.152 [job1] 00:10:48.152 filename=/dev/nvme0n2 00:10:48.152 [job2] 00:10:48.152 filename=/dev/nvme0n3 00:10:48.152 [job3] 00:10:48.152 filename=/dev/nvme0n4 00:10:48.152 Could not set queue depth (nvme0n1) 00:10:48.152 Could not set queue depth (nvme0n2) 00:10:48.152 Could not set queue depth (nvme0n3) 00:10:48.152 Could not set queue depth (nvme0n4) 00:10:48.410 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.410 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.410 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.410 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.410 fio-3.35 00:10:48.410 Starting 4 threads 00:10:49.782 00:10:49.782 job0: (groupid=0, jobs=1): err= 0: pid=155588: Mon Oct 14 13:21:41 2024 00:10:49.782 read: IOPS=2395, BW=9582KiB/s (9812kB/s)(9592KiB/1001msec) 00:10:49.782 slat (nsec): min=4586, max=50527, avg=11836.01, stdev=3756.85 00:10:49.782 clat (usec): min=176, max=516, avg=213.28, stdev=30.02 00:10:49.782 lat (usec): min=181, max=550, avg=225.11, stdev=32.02 00:10:49.782 clat percentiles (usec): 00:10:49.782 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:10:49.782 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:49.782 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:10:49.782 | 99.00th=[ 338], 99.50th=[ 474], 99.90th=[ 498], 99.95th=[ 506], 00:10:49.782 | 99.99th=[ 519] 00:10:49.782 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:49.782 slat (nsec): min=6551, max=68301, avg=15204.46, stdev=4878.15 
00:10:49.782 clat (usec): min=132, max=398, avg=157.22, stdev=15.15 00:10:49.782 lat (usec): min=141, max=414, avg=172.42, stdev=16.34 00:10:49.782 clat percentiles (usec): 00:10:49.782 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:10:49.783 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:10:49.783 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 182], 00:10:49.783 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 310], 99.95th=[ 388], 00:10:49.783 | 99.99th=[ 400] 00:10:49.783 bw ( KiB/s): min=12288, max=12288, per=55.85%, avg=12288.00, stdev= 0.00, samples=1 00:10:49.783 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:49.783 lat (usec) : 250=98.02%, 500=1.94%, 750=0.04% 00:10:49.783 cpu : usr=2.90%, sys=7.80%, ctx=4960, majf=0, minf=1 00:10:49.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 issued rwts: total=2398,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.783 job1: (groupid=0, jobs=1): err= 0: pid=155589: Mon Oct 14 13:21:41 2024 00:10:49.783 read: IOPS=1831, BW=7325KiB/s (7500kB/s)(7332KiB/1001msec) 00:10:49.783 slat (nsec): min=5704, max=48288, avg=16354.94, stdev=5338.12 00:10:49.783 clat (usec): min=185, max=601, avg=273.55, stdev=81.56 00:10:49.783 lat (usec): min=191, max=618, avg=289.90, stdev=83.86 00:10:49.783 clat percentiles (usec): 00:10:49.783 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 233], 00:10:49.783 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:49.783 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 408], 95.00th=[ 490], 00:10:49.783 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 603], 00:10:49.783 | 99.99th=[ 603] 00:10:49.783 write: 
IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:49.783 slat (nsec): min=7567, max=53839, avg=19118.33, stdev=5673.64 00:10:49.783 clat (usec): min=152, max=432, avg=199.78, stdev=32.31 00:10:49.783 lat (usec): min=173, max=441, avg=218.90, stdev=28.67 00:10:49.783 clat percentiles (usec): 00:10:49.783 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:10:49.783 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:10:49.783 | 70.00th=[ 202], 80.00th=[ 227], 90.00th=[ 251], 95.00th=[ 265], 00:10:49.783 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 400], 99.95th=[ 412], 00:10:49.783 | 99.99th=[ 433] 00:10:49.783 bw ( KiB/s): min= 8192, max= 8192, per=37.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:49.783 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:49.783 lat (usec) : 250=72.66%, 500=25.33%, 750=2.01% 00:10:49.783 cpu : usr=5.50%, sys=9.20%, ctx=3881, majf=0, minf=1 00:10:49.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 issued rwts: total=1833,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.783 job2: (groupid=0, jobs=1): err= 0: pid=155590: Mon Oct 14 13:21:41 2024 00:10:49.783 read: IOPS=26, BW=105KiB/s (108kB/s)(108KiB/1024msec) 00:10:49.783 slat (nsec): min=5484, max=65995, avg=27994.26, stdev=13507.27 00:10:49.783 clat (usec): min=242, max=42189, avg=34121.72, stdev=16453.58 00:10:49.783 lat (usec): min=252, max=42195, avg=34149.72, stdev=16457.47 00:10:49.783 clat percentiles (usec): 00:10:49.783 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 247], 20.00th=[41157], 00:10:49.783 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:49.783 | 70.00th=[42206], 80.00th=[42206], 
90.00th=[42206], 95.00th=[42206], 00:10:49.783 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:49.783 | 99.99th=[42206] 00:10:49.783 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:49.783 slat (nsec): min=6289, max=53327, avg=10041.43, stdev=4466.98 00:10:49.783 clat (usec): min=148, max=307, avg=185.64, stdev=18.90 00:10:49.783 lat (usec): min=155, max=314, avg=195.68, stdev=20.10 00:10:49.783 clat percentiles (usec): 00:10:49.783 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:49.783 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:10:49.783 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215], 00:10:49.783 | 99.00th=[ 233], 99.50th=[ 255], 99.90th=[ 310], 99.95th=[ 310], 00:10:49.783 | 99.99th=[ 310] 00:10:49.783 bw ( KiB/s): min= 4096, max= 4096, per=18.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.783 lat (usec) : 250=94.99%, 500=0.93% 00:10:49.783 lat (msec) : 50=4.08% 00:10:49.783 cpu : usr=0.39%, sys=0.29%, ctx=539, majf=0, minf=1 00:10:49.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.783 job3: (groupid=0, jobs=1): err= 0: pid=155591: Mon Oct 14 13:21:41 2024 00:10:49.783 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:10:49.783 slat (nsec): min=13323, max=33528, avg=29928.48, stdev=7090.45 00:10:49.783 clat (usec): min=40883, max=42361, avg=41599.47, stdev=524.03 00:10:49.783 lat (usec): min=40916, max=42378, avg=41629.40, stdev=521.01 00:10:49.783 clat percentiles (usec): 00:10:49.783 | 
1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:49.783 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:49.783 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:49.783 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:49.783 | 99.99th=[42206] 00:10:49.783 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:49.783 slat (nsec): min=6665, max=44302, avg=10965.27, stdev=4930.12 00:10:49.783 clat (usec): min=162, max=407, avg=242.90, stdev=34.04 00:10:49.783 lat (usec): min=174, max=417, avg=253.87, stdev=31.39 00:10:49.783 clat percentiles (usec): 00:10:49.783 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 210], 00:10:49.783 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:10:49.783 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 293], 00:10:49.783 | 99.00th=[ 351], 99.50th=[ 392], 99.90th=[ 408], 99.95th=[ 408], 00:10:49.783 | 99.99th=[ 408] 00:10:49.783 bw ( KiB/s): min= 4096, max= 4096, per=18.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:49.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:49.783 lat (usec) : 250=56.85%, 500=39.21% 00:10:49.783 lat (msec) : 50=3.94% 00:10:49.783 cpu : usr=0.30%, sys=0.50%, ctx=534, majf=0, minf=2 00:10:49.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.783 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.783 00:10:49.783 Run status group 0 (all jobs): 00:10:49.783 READ: bw=16.3MiB/s (17.1MB/s), 83.5KiB/s-9582KiB/s (85.5kB/s-9812kB/s), io=16.7MiB (17.5MB), run=1001-1024msec 00:10:49.783 WRITE: bw=21.5MiB/s (22.5MB/s), 2000KiB/s-9.99MiB/s 
(2048kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1024msec 00:10:49.783 00:10:49.783 Disk stats (read/write): 00:10:49.783 nvme0n1: ios=2074/2180, merge=0/0, ticks=1392/326, in_queue=1718, util=97.19% 00:10:49.783 nvme0n2: ios=1536/1690, merge=0/0, ticks=395/315, in_queue=710, util=86.12% 00:10:49.783 nvme0n3: ios=22/512, merge=0/0, ticks=716/94, in_queue=810, util=88.94% 00:10:49.783 nvme0n4: ios=17/512, merge=0/0, ticks=711/127, in_queue=838, util=89.60% 00:10:49.783 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:49.783 [global] 00:10:49.783 thread=1 00:10:49.783 invalidate=1 00:10:49.783 rw=randwrite 00:10:49.783 time_based=1 00:10:49.783 runtime=1 00:10:49.783 ioengine=libaio 00:10:49.783 direct=1 00:10:49.783 bs=4096 00:10:49.783 iodepth=1 00:10:49.783 norandommap=0 00:10:49.783 numjobs=1 00:10:49.783 00:10:49.783 verify_dump=1 00:10:49.783 verify_backlog=512 00:10:49.783 verify_state_save=0 00:10:49.783 do_verify=1 00:10:49.783 verify=crc32c-intel 00:10:49.783 [job0] 00:10:49.783 filename=/dev/nvme0n1 00:10:49.783 [job1] 00:10:49.783 filename=/dev/nvme0n2 00:10:49.783 [job2] 00:10:49.783 filename=/dev/nvme0n3 00:10:49.783 [job3] 00:10:49.783 filename=/dev/nvme0n4 00:10:49.783 Could not set queue depth (nvme0n1) 00:10:49.783 Could not set queue depth (nvme0n2) 00:10:49.783 Could not set queue depth (nvme0n3) 00:10:49.783 Could not set queue depth (nvme0n4) 00:10:49.783 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.783 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.783 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.783 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:49.783 fio-3.35 00:10:49.783 Starting 4 threads 00:10:51.158 00:10:51.158 job0: (groupid=0, jobs=1): err= 0: pid=155943: Mon Oct 14 13:21:42 2024 00:10:51.158 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec) 00:10:51.158 slat (nsec): min=12407, max=36145, avg=26018.74, stdev=9491.84 00:10:51.158 clat (usec): min=286, max=45040, avg=39739.16, stdev=8647.00 00:10:51.158 lat (usec): min=305, max=45057, avg=39765.18, stdev=8648.22 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[ 285], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:51.158 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:51.158 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:51.158 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:51.158 | 99.99th=[44827] 00:10:51.158 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:10:51.158 slat (nsec): min=7448, max=55590, avg=17281.58, stdev=7958.29 00:10:51.158 clat (usec): min=149, max=250, avg=192.09, stdev=15.57 00:10:51.158 lat (usec): min=158, max=298, avg=209.37, stdev=19.20 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 182], 00:10:51.158 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:10:51.158 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:10:51.158 | 99.00th=[ 229], 99.50th=[ 233], 99.90th=[ 251], 99.95th=[ 251], 00:10:51.158 | 99.99th=[ 251] 00:10:51.158 bw ( KiB/s): min= 4096, max= 4096, per=28.83%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.158 lat (usec) : 250=95.51%, 500=0.37% 00:10:51.158 lat (msec) : 50=4.11% 00:10:51.158 cpu : usr=0.98%, sys=0.78%, ctx=535, majf=0, minf=2 00:10:51.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.158 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.158 job1: (groupid=0, jobs=1): err= 0: pid=155944: Mon Oct 14 13:21:42 2024 00:10:51.158 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:51.158 slat (nsec): min=5332, max=53805, avg=13162.53, stdev=5269.82 00:10:51.158 clat (usec): min=181, max=2381, avg=258.99, stdev=79.86 00:10:51.158 lat (usec): min=187, max=2391, avg=272.16, stdev=81.80 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:10:51.158 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:10:51.158 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 420], 95.00th=[ 429], 00:10:51.158 | 99.00th=[ 437], 99.50th=[ 441], 99.90th=[ 445], 99.95th=[ 449], 00:10:51.158 | 99.99th=[ 2376] 00:10:51.158 write: IOPS=2102, BW=8412KiB/s (8613kB/s)(8420KiB/1001msec); 0 zone resets 00:10:51.158 slat (nsec): min=6729, max=57529, avg=16936.06, stdev=6063.69 00:10:51.158 clat (usec): min=130, max=891, avg=184.09, stdev=41.34 00:10:51.158 lat (usec): min=138, max=900, avg=201.03, stdev=42.88 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 00:10:51.158 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:51.158 | 70.00th=[ 186], 80.00th=[ 204], 90.00th=[ 229], 95.00th=[ 245], 00:10:51.158 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 758], 99.95th=[ 832], 00:10:51.158 | 99.99th=[ 889] 00:10:51.158 bw ( KiB/s): min= 8192, max= 8192, per=57.65%, avg=8192.00, stdev= 0.00, samples=1 00:10:51.158 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:51.158 lat (usec) : 250=79.92%, 500=19.96%, 750=0.02%, 1000=0.07% 00:10:51.158 lat (msec) : 4=0.02% 
00:10:51.158 cpu : usr=5.40%, sys=8.20%, ctx=4153, majf=0, minf=1 00:10:51.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 issued rwts: total=2048,2105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.158 job2: (groupid=0, jobs=1): err= 0: pid=155946: Mon Oct 14 13:21:42 2024 00:10:51.158 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:10:51.158 slat (nsec): min=12393, max=37395, avg=27500.73, stdev=9560.27 00:10:51.158 clat (usec): min=281, max=42023, avg=39896.13, stdev=8856.08 00:10:51.158 lat (usec): min=312, max=42036, avg=39923.63, stdev=8855.49 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[ 281], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:51.158 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:51.158 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:51.158 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:51.158 | 99.99th=[42206] 00:10:51.158 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:51.158 slat (nsec): min=7818, max=60087, avg=19509.03, stdev=8564.40 00:10:51.158 clat (usec): min=164, max=357, avg=234.96, stdev=22.52 00:10:51.158 lat (usec): min=174, max=381, avg=254.46, stdev=23.15 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 219], 00:10:51.158 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:10:51.158 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:10:51.158 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 359], 99.95th=[ 359], 00:10:51.158 | 99.99th=[ 359] 00:10:51.158 bw ( KiB/s): min= 4096, max= 4096, per=28.83%, avg=4096.00, 
stdev= 0.00, samples=1 00:10:51.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.158 lat (usec) : 250=70.97%, 500=25.09% 00:10:51.158 lat (msec) : 50=3.93% 00:10:51.158 cpu : usr=0.99%, sys=0.99%, ctx=535, majf=0, minf=1 00:10:51.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.158 job3: (groupid=0, jobs=1): err= 0: pid=155947: Mon Oct 14 13:21:42 2024 00:10:51.158 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:10:51.158 slat (nsec): min=13153, max=34755, avg=27152.38, stdev=9265.21 00:10:51.158 clat (usec): min=41901, max=42317, avg=41979.67, stdev=80.42 00:10:51.158 lat (usec): min=41936, max=42331, avg=42006.82, stdev=76.40 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:10:51.158 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:51.158 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:51.158 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:51.158 | 99.99th=[42206] 00:10:51.158 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:51.158 slat (nsec): min=6144, max=63695, avg=14595.34, stdev=6529.16 00:10:51.158 clat (usec): min=159, max=4152, avg=218.73, stdev=196.57 00:10:51.158 lat (usec): min=173, max=4170, avg=233.32, stdev=197.91 00:10:51.158 clat percentiles (usec): 00:10:51.158 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:10:51.158 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:10:51.158 | 70.00th=[ 208], 80.00th=[ 227], 90.00th=[ 251], 95.00th=[ 
277], 00:10:51.158 | 99.00th=[ 400], 99.50th=[ 404], 99.90th=[ 4146], 99.95th=[ 4146], 00:10:51.158 | 99.99th=[ 4146] 00:10:51.158 bw ( KiB/s): min= 4096, max= 4096, per=28.83%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.158 lat (usec) : 250=85.93%, 500=9.76% 00:10:51.158 lat (msec) : 4=0.19%, 10=0.19%, 50=3.94% 00:10:51.158 cpu : usr=0.30%, sys=0.80%, ctx=536, majf=0, minf=1 00:10:51.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.158 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.158 00:10:51.158 Run status group 0 (all jobs): 00:10:51.158 READ: bw=8250KiB/s (8448kB/s), 83.7KiB/s-8184KiB/s (85.7kB/s-8380kB/s), io=8456KiB (8659kB), run=1001-1025msec 00:10:51.158 WRITE: bw=13.9MiB/s (14.5MB/s), 1998KiB/s-8412KiB/s (2046kB/s-8613kB/s), io=14.2MiB (14.9MB), run=1001-1025msec 00:10:51.158 00:10:51.158 Disk stats (read/write): 00:10:51.158 nvme0n1: ios=68/512, merge=0/0, ticks=729/92, in_queue=821, util=86.77% 00:10:51.158 nvme0n2: ios=1586/1903, merge=0/0, ticks=450/341, in_queue=791, util=90.76% 00:10:51.158 nvme0n3: ios=63/512, merge=0/0, ticks=989/116, in_queue=1105, util=98.75% 00:10:51.158 nvme0n4: ios=68/512, merge=0/0, ticks=1411/103, in_queue=1514, util=98.53% 00:10:51.158 13:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:51.158 [global] 00:10:51.158 thread=1 00:10:51.158 invalidate=1 00:10:51.158 rw=write 00:10:51.158 time_based=1 00:10:51.158 runtime=1 00:10:51.158 ioengine=libaio 00:10:51.158 direct=1 00:10:51.158 bs=4096 00:10:51.158 
iodepth=128 00:10:51.158 norandommap=0 00:10:51.158 numjobs=1 00:10:51.158 00:10:51.158 verify_dump=1 00:10:51.158 verify_backlog=512 00:10:51.158 verify_state_save=0 00:10:51.158 do_verify=1 00:10:51.158 verify=crc32c-intel 00:10:51.158 [job0] 00:10:51.158 filename=/dev/nvme0n1 00:10:51.158 [job1] 00:10:51.158 filename=/dev/nvme0n2 00:10:51.158 [job2] 00:10:51.158 filename=/dev/nvme0n3 00:10:51.158 [job3] 00:10:51.158 filename=/dev/nvme0n4 00:10:51.158 Could not set queue depth (nvme0n1) 00:10:51.158 Could not set queue depth (nvme0n2) 00:10:51.158 Could not set queue depth (nvme0n3) 00:10:51.158 Could not set queue depth (nvme0n4) 00:10:51.417 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.417 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.417 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.417 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.417 fio-3.35 00:10:51.417 Starting 4 threads 00:10:52.792 00:10:52.792 job0: (groupid=0, jobs=1): err= 0: pid=156174: Mon Oct 14 13:21:44 2024 00:10:52.792 read: IOPS=4766, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1002msec) 00:10:52.792 slat (usec): min=3, max=4543, avg=91.35, stdev=478.90 00:10:52.792 clat (usec): min=711, max=18356, avg=12245.32, stdev=1881.66 00:10:52.792 lat (usec): min=4245, max=19940, avg=12336.67, stdev=1914.36 00:10:52.792 clat percentiles (usec): 00:10:52.792 | 1.00th=[ 7767], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10814], 00:10:52.792 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780], 00:10:52.792 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14615], 95.00th=[15533], 00:10:52.792 | 99.00th=[16319], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:10:52.792 | 99.99th=[18482] 00:10:52.792 write: IOPS=5109, BW=20.0MiB/s 
(20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:52.792 slat (usec): min=4, max=19486, avg=98.66, stdev=564.61 00:10:52.792 clat (usec): min=7238, max=45377, avg=13271.01, stdev=4291.84 00:10:52.792 lat (usec): min=7248, max=45397, avg=13369.67, stdev=4322.12 00:10:52.792 clat percentiles (usec): 00:10:52.792 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:10:52.792 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12911], 60.00th=[13698], 00:10:52.792 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14484], 95.00th=[16319], 00:10:52.792 | 99.00th=[38011], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:52.792 | 99.99th=[45351] 00:10:52.792 bw ( KiB/s): min=20480, max=20480, per=32.14%, avg=20480.00, stdev= 0.00, samples=2 00:10:52.792 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:52.792 lat (usec) : 750=0.01% 00:10:52.792 lat (msec) : 10=8.61%, 20=89.67%, 50=1.71% 00:10:52.792 cpu : usr=7.09%, sys=12.39%, ctx=411, majf=0, minf=1 00:10:52.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:52.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.792 issued rwts: total=4776,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.792 job1: (groupid=0, jobs=1): err= 0: pid=156175: Mon Oct 14 13:21:44 2024 00:10:52.792 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:10:52.792 slat (usec): min=2, max=13701, avg=142.25, stdev=921.23 00:10:52.792 clat (usec): min=6043, max=38434, avg=17602.94, stdev=4909.92 00:10:52.792 lat (usec): min=6059, max=38442, avg=17745.19, stdev=4986.86 00:10:52.792 clat percentiles (usec): 00:10:52.792 | 1.00th=[ 9241], 5.00th=[11469], 10.00th=[12518], 20.00th=[12649], 00:10:52.792 | 30.00th=[14746], 40.00th=[15401], 50.00th=[16909], 60.00th=[18482], 00:10:52.792 | 
70.00th=[19530], 80.00th=[20841], 90.00th=[24773], 95.00th=[25822], 00:10:52.792 | 99.00th=[32637], 99.50th=[36439], 99.90th=[38536], 99.95th=[38536], 00:10:52.792 | 99.99th=[38536] 00:10:52.792 write: IOPS=2722, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1008msec); 0 zone resets 00:10:52.792 slat (usec): min=3, max=13706, avg=212.85, stdev=937.07 00:10:52.792 clat (msec): min=2, max=102, avg=30.16, stdev=17.72 00:10:52.792 lat (msec): min=3, max=102, avg=30.37, stdev=17.81 00:10:52.792 clat percentiles (msec): 00:10:52.792 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 14], 00:10:52.792 | 30.00th=[ 16], 40.00th=[ 26], 50.00th=[ 31], 60.00th=[ 34], 00:10:52.792 | 70.00th=[ 36], 80.00th=[ 40], 90.00th=[ 47], 95.00th=[ 70], 00:10:52.792 | 99.00th=[ 90], 99.50th=[ 92], 99.90th=[ 103], 99.95th=[ 103], 00:10:52.792 | 99.99th=[ 103] 00:10:52.792 bw ( KiB/s): min= 8648, max=12288, per=16.43%, avg=10468.00, stdev=2573.87, samples=2 00:10:52.792 iops : min= 2162, max= 3072, avg=2617.00, stdev=643.47, samples=2 00:10:52.792 lat (msec) : 4=0.11%, 10=3.36%, 20=50.62%, 50=41.59%, 100=4.22% 00:10:52.792 lat (msec) : 250=0.09% 00:10:52.792 cpu : usr=2.88%, sys=5.86%, ctx=336, majf=0, minf=1 00:10:52.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:52.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.792 issued rwts: total=2560,2744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.792 job2: (groupid=0, jobs=1): err= 0: pid=156176: Mon Oct 14 13:21:44 2024 00:10:52.792 read: IOPS=3837, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1006msec) 00:10:52.792 slat (usec): min=2, max=14460, avg=121.43, stdev=854.25 00:10:52.792 clat (usec): min=4295, max=81000, avg=15828.43, stdev=10209.55 00:10:52.792 lat (usec): min=4519, max=81037, avg=15949.86, stdev=10278.49 00:10:52.792 clat 
percentiles (usec): 00:10:52.792 | 1.00th=[ 5407], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[11994], 00:10:52.792 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12780], 60.00th=[13566], 00:10:52.792 | 70.00th=[15008], 80.00th=[17957], 90.00th=[22152], 95.00th=[28705], 00:10:52.792 | 99.00th=[70779], 99.50th=[76022], 99.90th=[76022], 99.95th=[78119], 00:10:52.792 | 99.99th=[81265] 00:10:52.792 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:10:52.792 slat (usec): min=3, max=15817, avg=113.19, stdev=842.13 00:10:52.792 clat (usec): min=1051, max=55845, avg=16182.04, stdev=7404.40 00:10:52.792 lat (usec): min=3332, max=55890, avg=16295.22, stdev=7470.56 00:10:52.792 clat percentiles (usec): 00:10:52.792 | 1.00th=[ 4293], 5.00th=[ 8455], 10.00th=[11338], 20.00th=[11994], 00:10:52.792 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13042], 60.00th=[13566], 00:10:52.792 | 70.00th=[16909], 80.00th=[19792], 90.00th=[27657], 95.00th=[29492], 00:10:52.792 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:10:52.792 | 99.99th=[55837] 00:10:52.792 bw ( KiB/s): min=15512, max=17256, per=25.71%, avg=16384.00, stdev=1233.19, samples=2 00:10:52.792 iops : min= 3878, max= 4314, avg=4096.00, stdev=308.30, samples=2 00:10:52.792 lat (msec) : 2=0.01%, 4=0.43%, 10=7.64%, 20=75.92%, 50=14.47% 00:10:52.792 lat (msec) : 100=1.53% 00:10:52.792 cpu : usr=4.28%, sys=8.46%, ctx=243, majf=0, minf=1 00:10:52.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:52.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.792 issued rwts: total=3861,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.792 job3: (groupid=0, jobs=1): err= 0: pid=156177: Mon Oct 14 13:21:44 2024 00:10:52.792 read: IOPS=3655, BW=14.3MiB/s 
(15.0MB/s)(14.3MiB/1004msec) 00:10:52.792 slat (usec): min=2, max=9221, avg=117.84, stdev=619.09 00:10:52.792 clat (usec): min=2890, max=29079, avg=15002.82, stdev=3825.37 00:10:52.792 lat (usec): min=2904, max=33777, avg=15120.67, stdev=3865.46 00:10:52.792 clat percentiles (usec): 00:10:52.792 | 1.00th=[ 4555], 5.00th=[10159], 10.00th=[11469], 20.00th=[12649], 00:10:52.792 | 30.00th=[13042], 40.00th=[13304], 50.00th=[14353], 60.00th=[15008], 00:10:52.792 | 70.00th=[15664], 80.00th=[19006], 90.00th=[20579], 95.00th=[21627], 00:10:52.792 | 99.00th=[24249], 99.50th=[25297], 99.90th=[27657], 99.95th=[28967], 00:10:52.792 | 99.99th=[28967] 00:10:52.792 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:52.792 slat (usec): min=3, max=12009, avg=125.43, stdev=730.22 00:10:52.792 clat (usec): min=4138, max=62952, avg=17502.31, stdev=9957.13 00:10:52.792 lat (usec): min=4155, max=62964, avg=17627.74, stdev=10018.27 00:10:52.792 clat percentiles (usec): 00:10:52.792 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[11076], 20.00th=[12125], 00:10:52.792 | 30.00th=[12911], 40.00th=[13435], 50.00th=[14222], 60.00th=[14746], 00:10:52.792 | 70.00th=[16909], 80.00th=[21365], 90.00th=[28967], 95.00th=[35914], 00:10:52.792 | 99.00th=[62653], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:10:52.792 | 99.99th=[63177] 00:10:52.792 bw ( KiB/s): min=14080, max=18360, per=25.46%, avg=16220.00, stdev=3026.42, samples=2 00:10:52.792 iops : min= 3520, max= 4590, avg=4055.00, stdev=756.60, samples=2 00:10:52.792 lat (msec) : 4=0.39%, 10=5.81%, 20=77.83%, 50=14.49%, 100=1.49% 00:10:52.792 cpu : usr=4.99%, sys=7.28%, ctx=388, majf=0, minf=1 00:10:52.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:52.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.792 issued rwts: total=3670,4096,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:52.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.792 00:10:52.792 Run status group 0 (all jobs): 00:10:52.792 READ: bw=57.6MiB/s (60.4MB/s), 9.92MiB/s-18.6MiB/s (10.4MB/s-19.5MB/s), io=58.1MiB (60.9MB), run=1002-1008msec 00:10:52.792 WRITE: bw=62.2MiB/s (65.2MB/s), 10.6MiB/s-20.0MiB/s (11.1MB/s-20.9MB/s), io=62.7MiB (65.8MB), run=1002-1008msec 00:10:52.792 00:10:52.792 Disk stats (read/write): 00:10:52.792 nvme0n1: ios=4128/4415, merge=0/0, ticks=16457/17296, in_queue=33753, util=98.00% 00:10:52.792 nvme0n2: ios=2097/2423, merge=0/0, ticks=29743/65509, in_queue=95252, util=87.80% 00:10:52.792 nvme0n3: ios=3501/3584, merge=0/0, ticks=34262/30437, in_queue=64699, util=99.69% 00:10:52.792 nvme0n4: ios=3117/3251, merge=0/0, ticks=18991/23757, in_queue=42748, util=91.17% 00:10:52.792 13:21:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:52.792 [global] 00:10:52.792 thread=1 00:10:52.792 invalidate=1 00:10:52.792 rw=randwrite 00:10:52.792 time_based=1 00:10:52.792 runtime=1 00:10:52.792 ioengine=libaio 00:10:52.792 direct=1 00:10:52.792 bs=4096 00:10:52.792 iodepth=128 00:10:52.792 norandommap=0 00:10:52.792 numjobs=1 00:10:52.792 00:10:52.792 verify_dump=1 00:10:52.792 verify_backlog=512 00:10:52.792 verify_state_save=0 00:10:52.792 do_verify=1 00:10:52.792 verify=crc32c-intel 00:10:52.792 [job0] 00:10:52.792 filename=/dev/nvme0n1 00:10:52.792 [job1] 00:10:52.792 filename=/dev/nvme0n2 00:10:52.792 [job2] 00:10:52.792 filename=/dev/nvme0n3 00:10:52.792 [job3] 00:10:52.792 filename=/dev/nvme0n4 00:10:52.792 Could not set queue depth (nvme0n1) 00:10:52.792 Could not set queue depth (nvme0n2) 00:10:52.792 Could not set queue depth (nvme0n3) 00:10:52.792 Could not set queue depth (nvme0n4) 00:10:52.792 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:52.792 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.792 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.793 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.793 fio-3.35 00:10:52.793 Starting 4 threads 00:10:54.168 00:10:54.168 job0: (groupid=0, jobs=1): err= 0: pid=156401: Mon Oct 14 13:21:45 2024 00:10:54.168 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:10:54.168 slat (usec): min=3, max=27262, avg=156.62, stdev=1206.60 00:10:54.168 clat (usec): min=9271, max=98269, avg=21125.29, stdev=15784.94 00:10:54.168 lat (usec): min=9289, max=98306, avg=21281.91, stdev=15914.02 00:10:54.168 clat percentiles (usec): 00:10:54.168 | 1.00th=[10028], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:10:54.168 | 30.00th=[12649], 40.00th=[14091], 50.00th=[16712], 60.00th=[17957], 00:10:54.168 | 70.00th=[18220], 80.00th=[18744], 90.00th=[42206], 95.00th=[60556], 00:10:54.168 | 99.00th=[79168], 99.50th=[84411], 99.90th=[84411], 99.95th=[98042], 00:10:54.168 | 99.99th=[98042] 00:10:54.168 write: IOPS=3368, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1009msec); 0 zone resets 00:10:54.168 slat (usec): min=4, max=27800, avg=145.79, stdev=1062.16 00:10:54.168 clat (usec): min=589, max=75073, avg=18117.71, stdev=10797.83 00:10:54.168 lat (usec): min=8569, max=75124, avg=18263.50, stdev=10897.96 00:10:54.168 clat percentiles (usec): 00:10:54.168 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[11600], 20.00th=[11994], 00:10:54.168 | 30.00th=[12256], 40.00th=[13042], 50.00th=[14353], 60.00th=[15008], 00:10:54.168 | 70.00th=[15533], 80.00th=[17957], 90.00th=[36439], 95.00th=[46400], 00:10:54.168 | 99.00th=[60031], 99.50th=[60031], 99.90th=[61604], 99.95th=[63177], 00:10:54.168 | 99.99th=[74974] 00:10:54.168 bw ( KiB/s): min= 9784, max=16384, 
per=19.43%, avg=13084.00, stdev=4666.90, samples=2 00:10:54.168 iops : min= 2446, max= 4096, avg=3271.00, stdev=1166.73, samples=2 00:10:54.168 lat (usec) : 750=0.02% 00:10:54.168 lat (msec) : 10=1.36%, 20=79.86%, 50=14.19%, 100=4.57% 00:10:54.168 cpu : usr=4.27%, sys=6.05%, ctx=258, majf=0, minf=1 00:10:54.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:54.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.168 issued rwts: total=3072,3399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.168 job1: (groupid=0, jobs=1): err= 0: pid=156402: Mon Oct 14 13:21:45 2024 00:10:54.168 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:54.168 slat (usec): min=2, max=9257, avg=92.97, stdev=489.01 00:10:54.168 clat (usec): min=7080, max=19990, avg=11969.55, stdev=1472.26 00:10:54.168 lat (usec): min=7085, max=19995, avg=12062.52, stdev=1471.09 00:10:54.168 clat percentiles (usec): 00:10:54.168 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:10:54.168 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12125], 00:10:54.168 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13698], 95.00th=[14222], 00:10:54.168 | 99.00th=[16450], 99.50th=[18744], 99.90th=[20055], 99.95th=[20055], 00:10:54.168 | 99.99th=[20055] 00:10:54.168 write: IOPS=5478, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1003msec); 0 zone resets 00:10:54.168 slat (usec): min=3, max=7189, avg=89.36, stdev=442.00 00:10:54.168 clat (usec): min=682, max=34505, avg=11934.65, stdev=2164.12 00:10:54.168 lat (usec): min=5872, max=34515, avg=12024.01, stdev=2159.70 00:10:54.168 clat percentiles (usec): 00:10:54.168 | 1.00th=[ 6325], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10814], 00:10:54.168 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:10:54.168 | 
70.00th=[12387], 80.00th=[12780], 90.00th=[13566], 95.00th=[14615], 00:10:54.168 | 99.00th=[17957], 99.50th=[25035], 99.90th=[31065], 99.95th=[31065], 00:10:54.168 | 99.99th=[34341] 00:10:54.168 bw ( KiB/s): min=20480, max=22456, per=31.88%, avg=21468.00, stdev=1397.24, samples=2 00:10:54.168 iops : min= 5120, max= 5614, avg=5367.00, stdev=349.31, samples=2 00:10:54.168 lat (usec) : 750=0.01% 00:10:54.168 lat (msec) : 10=9.96%, 20=89.58%, 50=0.45% 00:10:54.168 cpu : usr=3.99%, sys=6.59%, ctx=590, majf=0, minf=1 00:10:54.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:54.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.168 issued rwts: total=5120,5495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.168 job2: (groupid=0, jobs=1): err= 0: pid=156403: Mon Oct 14 13:21:45 2024 00:10:54.168 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:54.168 slat (usec): min=3, max=8327, avg=118.03, stdev=688.50 00:10:54.168 clat (usec): min=8766, max=24075, avg=14971.85, stdev=2443.94 00:10:54.168 lat (usec): min=8789, max=24108, avg=15089.88, stdev=2516.74 00:10:54.168 clat percentiles (usec): 00:10:54.168 | 1.00th=[ 9372], 5.00th=[11863], 10.00th=[12387], 20.00th=[12780], 00:10:54.168 | 30.00th=[13304], 40.00th=[14222], 50.00th=[14877], 60.00th=[15401], 00:10:54.169 | 70.00th=[15926], 80.00th=[16581], 90.00th=[17695], 95.00th=[19792], 00:10:54.169 | 99.00th=[22152], 99.50th=[22938], 99.90th=[23987], 99.95th=[23987], 00:10:54.169 | 99.99th=[23987] 00:10:54.169 write: IOPS=4479, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1006msec); 0 zone resets 00:10:54.169 slat (usec): min=4, max=7023, avg=104.58, stdev=507.24 00:10:54.169 clat (usec): min=5765, max=25238, avg=14652.32, stdev=2534.25 00:10:54.169 lat (usec): min=5774, max=25252, avg=14756.91, 
stdev=2558.12 00:10:54.169 clat percentiles (usec): 00:10:54.169 | 1.00th=[ 8094], 5.00th=[10683], 10.00th=[12125], 20.00th=[12780], 00:10:54.169 | 30.00th=[13042], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:10:54.169 | 70.00th=[15795], 80.00th=[16909], 90.00th=[17433], 95.00th=[18220], 00:10:54.169 | 99.00th=[22152], 99.50th=[23200], 99.90th=[25297], 99.95th=[25297], 00:10:54.169 | 99.99th=[25297] 00:10:54.169 bw ( KiB/s): min=16384, max=18648, per=26.02%, avg=17516.00, stdev=1600.89, samples=2 00:10:54.169 iops : min= 4096, max= 4662, avg=4379.00, stdev=400.22, samples=2 00:10:54.169 lat (msec) : 10=2.64%, 20=93.64%, 50=3.72% 00:10:54.169 cpu : usr=7.66%, sys=7.66%, ctx=483, majf=0, minf=1 00:10:54.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:54.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.169 issued rwts: total=4096,4506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.169 job3: (groupid=0, jobs=1): err= 0: pid=156404: Mon Oct 14 13:21:45 2024 00:10:54.169 read: IOPS=3409, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1009msec) 00:10:54.169 slat (usec): min=3, max=15442, avg=135.02, stdev=867.30 00:10:54.169 clat (usec): min=5339, max=48235, avg=16542.27, stdev=5465.52 00:10:54.169 lat (usec): min=7524, max=48251, avg=16677.29, stdev=5533.35 00:10:54.169 clat percentiles (usec): 00:10:54.169 | 1.00th=[ 7963], 5.00th=[11600], 10.00th=[12387], 20.00th=[13829], 00:10:54.169 | 30.00th=[14746], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:10:54.169 | 70.00th=[15795], 80.00th=[17171], 90.00th=[21365], 95.00th=[27132], 00:10:54.169 | 99.00th=[42730], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:10:54.169 | 99.99th=[48497] 00:10:54.169 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:10:54.169 slat 
(usec): min=4, max=13461, avg=134.83, stdev=759.66 00:10:54.169 clat (usec): min=2116, max=50078, avg=19812.50, stdev=10227.06 00:10:54.169 lat (usec): min=2126, max=50087, avg=19947.32, stdev=10300.31 00:10:54.169 clat percentiles (usec): 00:10:54.169 | 1.00th=[ 2966], 5.00th=[ 8291], 10.00th=[11731], 20.00th=[13566], 00:10:54.169 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15401], 60.00th=[16057], 00:10:54.169 | 70.00th=[17957], 80.00th=[31589], 90.00th=[36963], 95.00th=[39584], 00:10:54.169 | 99.00th=[45351], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:10:54.169 | 99.99th=[50070] 00:10:54.169 bw ( KiB/s): min=12240, max=16432, per=21.29%, avg=14336.00, stdev=2964.19, samples=2 00:10:54.169 iops : min= 3060, max= 4108, avg=3584.00, stdev=741.05, samples=2 00:10:54.169 lat (msec) : 4=1.17%, 10=4.17%, 20=72.95%, 50=21.70%, 100=0.01% 00:10:54.169 cpu : usr=3.87%, sys=7.44%, ctx=317, majf=0, minf=1 00:10:54.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:54.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.169 issued rwts: total=3440,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.169 00:10:54.169 Run status group 0 (all jobs): 00:10:54.169 READ: bw=60.9MiB/s (63.8MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=61.4MiB (64.4MB), run=1003-1009msec 00:10:54.169 WRITE: bw=65.8MiB/s (68.9MB/s), 13.2MiB/s-21.4MiB/s (13.8MB/s-22.4MB/s), io=66.3MiB (69.6MB), run=1003-1009msec 00:10:54.169 00:10:54.169 Disk stats (read/write): 00:10:54.169 nvme0n1: ios=2603/2937, merge=0/0, ticks=17966/16525, in_queue=34491, util=99.90% 00:10:54.169 nvme0n2: ios=4330/4608, merge=0/0, ticks=18070/19126, in_queue=37196, util=96.85% 00:10:54.169 nvme0n3: ios=3607/3702, merge=0/0, ticks=27328/25151, in_queue=52479, util=96.77% 00:10:54.169 nvme0n4: 
ios=3129/3175, merge=0/0, ticks=35415/43886, in_queue=79301, util=98.11% 00:10:54.169 13:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:54.169 13:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=156546 00:10:54.169 13:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:54.169 13:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:54.169 [global] 00:10:54.169 thread=1 00:10:54.169 invalidate=1 00:10:54.169 rw=read 00:10:54.169 time_based=1 00:10:54.169 runtime=10 00:10:54.169 ioengine=libaio 00:10:54.169 direct=1 00:10:54.169 bs=4096 00:10:54.169 iodepth=1 00:10:54.169 norandommap=1 00:10:54.169 numjobs=1 00:10:54.169 00:10:54.169 [job0] 00:10:54.169 filename=/dev/nvme0n1 00:10:54.169 [job1] 00:10:54.169 filename=/dev/nvme0n2 00:10:54.169 [job2] 00:10:54.169 filename=/dev/nvme0n3 00:10:54.169 [job3] 00:10:54.169 filename=/dev/nvme0n4 00:10:54.169 Could not set queue depth (nvme0n1) 00:10:54.169 Could not set queue depth (nvme0n2) 00:10:54.169 Could not set queue depth (nvme0n3) 00:10:54.169 Could not set queue depth (nvme0n4) 00:10:54.169 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.169 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.169 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.169 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.169 fio-3.35 00:10:54.169 Starting 4 threads 00:10:57.450 13:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:57.450 13:21:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:57.450 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1880064, buflen=4096 00:10:57.450 fio: pid=156739, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:57.707 13:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.707 13:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:57.707 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=319488, buflen=4096 00:10:57.707 fio: pid=156727, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:57.965 13:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.965 13:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:57.965 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6225920, buflen=4096 00:10:57.965 fio: pid=156678, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:58.223 13:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.223 13:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:58.223 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2101248, buflen=4096 00:10:58.223 fio: pid=156696, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:10:58.223 00:10:58.223 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156678: Mon Oct 14 13:21:49 2024 00:10:58.223 read: IOPS=435, BW=1743KiB/s (1784kB/s)(6080KiB/3489msec) 00:10:58.223 slat (usec): min=4, max=30902, avg=34.35, stdev=806.46 00:10:58.223 clat (usec): min=170, max=42140, avg=2243.09, stdev=8859.15 00:10:58.223 lat (usec): min=174, max=72953, avg=2277.46, stdev=9004.44 00:10:58.223 clat percentiles (usec): 00:10:58.223 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:10:58.223 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:10:58.223 | 70.00th=[ 239], 80.00th=[ 351], 90.00th=[ 445], 95.00th=[ 506], 00:10:58.223 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.223 | 99.99th=[42206] 00:10:58.223 bw ( KiB/s): min= 120, max=11416, per=74.01%, avg=2009.33, stdev=4608.32, samples=6 00:10:58.223 iops : min= 30, max= 2854, avg=502.33, stdev=1152.08, samples=6 00:10:58.223 lat (usec) : 250=72.32%, 500=22.55%, 750=0.20% 00:10:58.223 lat (msec) : 50=4.87% 00:10:58.223 cpu : usr=0.17%, sys=0.57%, ctx=1523, majf=0, minf=1 00:10:58.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.223 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.223 issued rwts: total=1521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.223 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156696: Mon Oct 14 13:21:49 2024 00:10:58.223 read: IOPS=135, BW=542KiB/s (555kB/s)(2052KiB/3787msec) 00:10:58.223 slat (usec): min=4, max=1908, avg=13.97, stdev=83.98 00:10:58.223 clat (usec): min=178, max=42037, avg=7319.53, stdev=15558.45 00:10:58.223 lat (usec): min=187, 
max=42998, avg=7333.49, stdev=15570.08 00:10:58.223 clat percentiles (usec): 00:10:58.223 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 235], 00:10:58.223 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:10:58.223 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[41157], 95.00th=[42206], 00:10:58.223 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.223 | 99.99th=[42206] 00:10:58.224 bw ( KiB/s): min= 96, max= 3464, per=21.29%, avg=578.86, stdev=1272.23, samples=7 00:10:58.224 iops : min= 24, max= 866, avg=144.71, stdev=318.06, samples=7 00:10:58.224 lat (usec) : 250=48.25%, 500=34.44% 00:10:58.224 lat (msec) : 50=17.12% 00:10:58.224 cpu : usr=0.05%, sys=0.16%, ctx=517, majf=0, minf=2 00:10:58.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.224 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.224 issued rwts: total=514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.224 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156727: Mon Oct 14 13:21:49 2024 00:10:58.224 read: IOPS=24, BW=97.2KiB/s (99.5kB/s)(312KiB/3211msec) 00:10:58.224 slat (usec): min=12, max=12877, avg=181.06, stdev=1446.75 00:10:58.224 clat (usec): min=368, max=42050, avg=40688.33, stdev=4644.47 00:10:58.224 lat (usec): min=408, max=53989, avg=40871.53, stdev=4879.54 00:10:58.224 clat percentiles (usec): 00:10:58.224 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:58.224 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:58.224 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:58.224 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.224 | 99.99th=[42206] 00:10:58.224 
bw ( KiB/s): min= 96, max= 104, per=3.57%, avg=97.33, stdev= 3.27, samples=6 00:10:58.224 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:10:58.224 lat (usec) : 500=1.27% 00:10:58.224 lat (msec) : 50=97.47% 00:10:58.224 cpu : usr=0.06%, sys=0.00%, ctx=80, majf=0, minf=2 00:10:58.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.224 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.224 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.224 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156739: Mon Oct 14 13:21:49 2024 00:10:58.224 read: IOPS=157, BW=630KiB/s (646kB/s)(1836KiB/2912msec) 00:10:58.224 slat (nsec): min=4911, max=59887, avg=11041.17, stdev=8361.05 00:10:58.224 clat (usec): min=183, max=42310, avg=6279.09, stdev=14565.43 00:10:58.224 lat (usec): min=188, max=42342, avg=6290.12, stdev=14568.36 00:10:58.224 clat percentiles (usec): 00:10:58.224 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:10:58.224 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:10:58.224 | 70.00th=[ 217], 80.00th=[ 281], 90.00th=[41157], 95.00th=[41157], 00:10:58.224 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.224 | 99.99th=[42206] 00:10:58.224 bw ( KiB/s): min= 96, max= 1224, per=11.90%, avg=323.00, stdev=503.68, samples=5 00:10:58.224 iops : min= 24, max= 306, avg=80.60, stdev=126.00, samples=5 00:10:58.224 lat (usec) : 250=75.00%, 500=9.78%, 750=0.22% 00:10:58.224 lat (msec) : 50=14.78% 00:10:58.224 cpu : usr=0.14%, sys=0.14%, ctx=460, majf=0, minf=1 00:10:58.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:58.224 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.224 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.224 00:10:58.224 Run status group 0 (all jobs): 00:10:58.224 READ: bw=2715KiB/s (2780kB/s), 97.2KiB/s-1743KiB/s (99.5kB/s-1784kB/s), io=10.0MiB (10.5MB), run=2912-3787msec 00:10:58.224 00:10:58.224 Disk stats (read/write): 00:10:58.224 nvme0n1: ios=1516/0, merge=0/0, ticks=3270/0, in_queue=3270, util=94.85% 00:10:58.224 nvme0n2: ios=509/0, merge=0/0, ticks=3585/0, in_queue=3585, util=96.46% 00:10:58.224 nvme0n3: ios=75/0, merge=0/0, ticks=3051/0, in_queue=3051, util=96.38% 00:10:58.224 nvme0n4: ios=458/0, merge=0/0, ticks=2843/0, in_queue=2843, util=96.74% 00:10:58.482 13:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.482 13:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:58.739 13:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.739 13:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:58.997 13:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.997 13:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:59.255 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:59.255 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:59.512 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:59.512 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 156546 00:10:59.512 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:59.512 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:59.771 nvmf hotplug test: fio failed as expected 00:10:59.771 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.029 rmmod nvme_tcp 00:11:00.029 rmmod nvme_fabrics 00:11:00.029 rmmod nvme_keyring 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 154501 ']' 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 154501 00:11:00.029 13:21:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 154501 ']' 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 154501 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 154501 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 154501' 00:11:00.029 killing process with pid 154501 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 154501 00:11:00.029 13:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 154501 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 
00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.288 13:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.817 00:11:02.817 real 0m24.059s 00:11:02.817 user 1m25.529s 00:11:02.817 sys 0m6.310s 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.817 ************************************ 00:11:02.817 END TEST nvmf_fio_target 00:11:02.817 ************************************ 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.817 ************************************ 00:11:02.817 START TEST nvmf_bdevio 00:11:02.817 ************************************ 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:02.817 * Looking for test storage... 00:11:02.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:02.817 13:21:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:02.817 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:02.817 --rc genhtml_branch_coverage=1 00:11:02.817 --rc genhtml_function_coverage=1 00:11:02.817 --rc genhtml_legend=1 00:11:02.817 --rc geninfo_all_blocks=1 00:11:02.817 --rc geninfo_unexecuted_blocks=1 00:11:02.817 00:11:02.817 ' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:02.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.817 --rc genhtml_branch_coverage=1 00:11:02.817 --rc genhtml_function_coverage=1 00:11:02.817 --rc genhtml_legend=1 00:11:02.817 --rc geninfo_all_blocks=1 00:11:02.817 --rc geninfo_unexecuted_blocks=1 00:11:02.817 00:11:02.817 ' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:02.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.817 --rc genhtml_branch_coverage=1 00:11:02.817 --rc genhtml_function_coverage=1 00:11:02.817 --rc genhtml_legend=1 00:11:02.817 --rc geninfo_all_blocks=1 00:11:02.817 --rc geninfo_unexecuted_blocks=1 00:11:02.817 00:11:02.817 ' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:02.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.817 --rc genhtml_branch_coverage=1 00:11:02.817 --rc genhtml_function_coverage=1 00:11:02.817 --rc genhtml_legend=1 00:11:02.817 --rc geninfo_all_blocks=1 00:11:02.817 --rc geninfo_unexecuted_blocks=1 00:11:02.817 00:11:02.817 ' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.817 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.818 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.721 13:21:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.721 13:21:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:04.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.721 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:04.722 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.722 
13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:04.722 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:04.722 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:04.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:11:04.722 00:11:04.722 --- 10.0.0.2 ping statistics --- 00:11:04.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.722 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:11:04.722 00:11:04.722 --- 10.0.0.1 ping statistics --- 00:11:04.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.722 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:04.722 13:21:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=159393 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 159393 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 159393 ']' 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.722 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.722 [2024-10-14 13:21:56.544318] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:11:04.722 [2024-10-14 13:21:56.544401] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.980 [2024-10-14 13:21:56.609283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.980 [2024-10-14 13:21:56.656752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.980 [2024-10-14 13:21:56.656804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.980 [2024-10-14 13:21:56.656833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.980 [2024-10-14 13:21:56.656845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.980 [2024-10-14 13:21:56.656855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:04.980 [2024-10-14 13:21:56.658508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:04.980 [2024-10-14 13:21:56.658531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:04.980 [2024-10-14 13:21:56.658591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:04.980 [2024-10-14 13:21:56.658594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.980 [2024-10-14 13:21:56.822375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.980 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.980 13:21:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.239 Malloc0 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.239 [2024-10-14 13:21:56.884371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:05.239 { 00:11:05.239 "params": { 00:11:05.239 "name": "Nvme$subsystem", 00:11:05.239 "trtype": "$TEST_TRANSPORT", 00:11:05.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:05.239 "adrfam": "ipv4", 00:11:05.239 "trsvcid": "$NVMF_PORT", 00:11:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:05.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:05.239 "hdgst": ${hdgst:-false}, 00:11:05.239 "ddgst": ${ddgst:-false} 00:11:05.239 }, 00:11:05.239 "method": "bdev_nvme_attach_controller" 00:11:05.239 } 00:11:05.239 EOF 00:11:05.239 )") 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:05.239 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:05.239 "params": { 00:11:05.239 "name": "Nvme1", 00:11:05.239 "trtype": "tcp", 00:11:05.239 "traddr": "10.0.0.2", 00:11:05.239 "adrfam": "ipv4", 00:11:05.239 "trsvcid": "4420", 00:11:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:05.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:05.239 "hdgst": false, 00:11:05.239 "ddgst": false 00:11:05.239 }, 00:11:05.239 "method": "bdev_nvme_attach_controller" 00:11:05.239 }' 00:11:05.239 [2024-10-14 13:21:56.934763] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:11:05.239 [2024-10-14 13:21:56.934828] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159427 ] 00:11:05.239 [2024-10-14 13:21:56.996208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.239 [2024-10-14 13:21:57.048956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.239 [2024-10-14 13:21:57.049010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.239 [2024-10-14 13:21:57.049013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.497 I/O targets: 00:11:05.497 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:05.497 00:11:05.497 00:11:05.497 CUnit - A unit testing framework for C - Version 2.1-3 00:11:05.497 http://cunit.sourceforge.net/ 00:11:05.497 00:11:05.497 00:11:05.497 Suite: bdevio tests on: Nvme1n1 00:11:05.497 Test: blockdev write read block ...passed 00:11:05.754 Test: blockdev write zeroes read block ...passed 00:11:05.754 Test: blockdev write zeroes read no split ...passed 00:11:05.754 Test: blockdev write zeroes read split 
...passed 00:11:05.754 Test: blockdev write zeroes read split partial ...passed 00:11:05.754 Test: blockdev reset ...[2024-10-14 13:21:57.391847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:05.754 [2024-10-14 13:21:57.391967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f0b80 (9): Bad file descriptor 00:11:05.754 [2024-10-14 13:21:57.446054] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:05.754 passed 00:11:05.754 Test: blockdev write read 8 blocks ...passed 00:11:05.754 Test: blockdev write read size > 128k ...passed 00:11:05.754 Test: blockdev write read invalid size ...passed 00:11:05.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:05.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:05.754 Test: blockdev write read max offset ...passed 00:11:06.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.012 Test: blockdev writev readv 8 blocks ...passed 00:11:06.012 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.012 Test: blockdev writev readv block ...passed 00:11:06.012 Test: blockdev writev readv size > 128k ...passed 00:11:06.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.012 Test: blockdev comparev and writev ...[2024-10-14 13:21:57.660246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.012 [2024-10-14 13:21:57.660296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.660321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.012 [2024-10-14 13:21:57.660338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.660651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.012 [2024-10-14 13:21:57.660676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.660698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.012 [2024-10-14 13:21:57.660713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.661018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.012 [2024-10-14 13:21:57.661042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.661064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.012 [2024-10-14 13:21:57.661079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.661396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:06.012 [2024-10-14 13:21:57.661420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.661442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:06.012 [2024-10-14 13:21:57.661458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:06.012 passed 00:11:06.012 Test: blockdev nvme passthru rw ...passed 00:11:06.012 Test: blockdev nvme passthru vendor specific ...[2024-10-14 13:21:57.744389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.012 [2024-10-14 13:21:57.744416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.744550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.012 [2024-10-14 13:21:57.744575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.744716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.012 [2024-10-14 13:21:57.744739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:06.012 [2024-10-14 13:21:57.744873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:06.012 [2024-10-14 13:21:57.744897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:06.012 passed 00:11:06.012 Test: blockdev nvme admin passthru ...passed 00:11:06.012 Test: blockdev copy ...passed 00:11:06.012 00:11:06.012 Run Summary: Type Total Ran Passed Failed Inactive 00:11:06.012 suites 1 1 n/a 0 0 00:11:06.012 tests 23 23 23 0 0 00:11:06.012 asserts 152 152 152 0 n/a 00:11:06.012 00:11:06.013 Elapsed time = 1.058 seconds 00:11:06.269 13:21:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.269 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.269 rmmod nvme_tcp 00:11:06.269 rmmod nvme_fabrics 00:11:06.269 rmmod nvme_keyring 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 159393 ']' 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 159393 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 159393 ']' 
00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 159393 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159393 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159393' 00:11:06.269 killing process with pid 159393 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 159393 00:11:06.269 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 159393 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.526 13:21:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.526 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.056 00:11:09.056 real 0m6.207s 00:11:09.056 user 0m9.304s 00:11:09.056 sys 0m2.123s 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 ************************************ 00:11:09.056 END TEST nvmf_bdevio 00:11:09.056 ************************************ 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:09.056 00:11:09.056 real 3m54.909s 00:11:09.056 user 10m12.707s 00:11:09.056 sys 1m6.567s 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 ************************************ 00:11:09.056 END TEST nvmf_target_core 00:11:09.056 ************************************ 00:11:09.056 13:22:00 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:09.056 13:22:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:09.056 13:22:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.056 13:22:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 
************************************ 00:11:09.056 START TEST nvmf_target_extra 00:11:09.056 ************************************ 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:09.056 * Looking for test storage... 00:11:09.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.056 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:09.057 
13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.057 --rc genhtml_branch_coverage=1 00:11:09.057 --rc genhtml_function_coverage=1 00:11:09.057 --rc genhtml_legend=1 00:11:09.057 --rc geninfo_all_blocks=1 00:11:09.057 
--rc geninfo_unexecuted_blocks=1 00:11:09.057 00:11:09.057 ' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.057 --rc genhtml_branch_coverage=1 00:11:09.057 --rc genhtml_function_coverage=1 00:11:09.057 --rc genhtml_legend=1 00:11:09.057 --rc geninfo_all_blocks=1 00:11:09.057 --rc geninfo_unexecuted_blocks=1 00:11:09.057 00:11:09.057 ' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.057 --rc genhtml_branch_coverage=1 00:11:09.057 --rc genhtml_function_coverage=1 00:11:09.057 --rc genhtml_legend=1 00:11:09.057 --rc geninfo_all_blocks=1 00:11:09.057 --rc geninfo_unexecuted_blocks=1 00:11:09.057 00:11:09.057 ' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.057 --rc genhtml_branch_coverage=1 00:11:09.057 --rc genhtml_function_coverage=1 00:11:09.057 --rc genhtml_legend=1 00:11:09.057 --rc geninfo_all_blocks=1 00:11:09.057 --rc geninfo_unexecuted_blocks=1 00:11:09.057 00:11:09.057 ' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.057 ************************************ 00:11:09.057 START TEST nvmf_example 00:11:09.057 ************************************ 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:09.057 * Looking for test storage... 00:11:09.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.057 
13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:09.057 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:09.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.058 --rc genhtml_branch_coverage=1 00:11:09.058 --rc genhtml_function_coverage=1 00:11:09.058 --rc genhtml_legend=1 00:11:09.058 --rc geninfo_all_blocks=1 00:11:09.058 --rc geninfo_unexecuted_blocks=1 00:11:09.058 00:11:09.058 ' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:09.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.058 --rc genhtml_branch_coverage=1 00:11:09.058 --rc genhtml_function_coverage=1 00:11:09.058 --rc genhtml_legend=1 00:11:09.058 --rc geninfo_all_blocks=1 00:11:09.058 --rc geninfo_unexecuted_blocks=1 00:11:09.058 00:11:09.058 ' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:09.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.058 --rc genhtml_branch_coverage=1 00:11:09.058 --rc genhtml_function_coverage=1 00:11:09.058 --rc genhtml_legend=1 00:11:09.058 --rc geninfo_all_blocks=1 00:11:09.058 --rc geninfo_unexecuted_blocks=1 00:11:09.058 00:11:09.058 ' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:09.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.058 --rc 
genhtml_branch_coverage=1 00:11:09.058 --rc genhtml_function_coverage=1 00:11:09.058 --rc genhtml_legend=1 00:11:09.058 --rc geninfo_all_blocks=1 00:11:09.058 --rc geninfo_unexecuted_blocks=1 00:11:09.058 00:11:09.058 ' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:09.058 13:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.058 
13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.058 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.588 13:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:11.588 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:11.588 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:11.588 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:11.588 13:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:11.588 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.588 
13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.588 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:11.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:11:11.589 00:11:11.589 --- 10.0.0.2 ping statistics --- 00:11:11.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.589 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:11:11.589 00:11:11.589 --- 10.0.0.1 ping statistics --- 00:11:11.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.589 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.589 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:11.589 13:22:02 
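The nvmf_tcp_init trace above moves one NIC into a private network namespace so the SPDK target (10.0.0.2) and the initiator (10.0.0.1) can exchange real TCP traffic on a single host, then verifies connectivity with ping in both directions. A minimal sketch of that sequence follows; the commands are echoed rather than executed here, since they require root and the two physical interfaces:

```shell
#!/bin/sh
# Sketch of the namespace split performed by nvmf_tcp_init in the trace.
# Commands are printed, not run: they need root and real NICs.
nvmf_ns_setup() {
    tgt_if=$1   # interface moved into the target namespace
    ini_if=$2   # interface left in the default (initiator) namespace
    ns=$3       # namespace holding the SPDK target
    echo "ip netns add $ns"
    echo "ip link set $tgt_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "ip netns exec $ns ip link set lo up"
}
nvmf_ns_setup cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Because the target interface lives inside the namespace, every later target-side command in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).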
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=161676 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 161676 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 161676 ']' 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:11.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:11.589 13:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:11.589 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:23.782 Initializing NVMe Controllers 00:11:23.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:23.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:23.782 Initialization complete. Launching workers. 00:11:23.782 ======================================================== 00:11:23.782 Latency(us) 00:11:23.782 Device Information : IOPS MiB/s Average min max 00:11:23.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14402.33 56.26 4445.19 887.19 15238.76 00:11:23.782 ======================================================== 00:11:23.782 Total : 14402.33 56.26 4445.19 887.19 15238.76 00:11:23.782 00:11:23.782 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:23.782 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:23.782 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:23.782 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:23.782 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.782 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.783 rmmod nvme_tcp 00:11:23.783 rmmod nvme_fabrics 00:11:23.783 rmmod nvme_keyring 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
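Before the spdk_nvme_perf run above, the test provisions the target through five rpc_cmd calls traced in the log: create the TCP transport, create a 64 MiB malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A sketch of the same sequence as plain `scripts/rpc.py` invocations follows; the commands are echoed rather than executed, since they assume a live nvmf_tgt process listening on the RPC socket:

```shell
#!/bin/sh
# Sketch of the provisioning RPCs traced above, expressed as the
# equivalent scripts/rpc.py calls. Printed, not executed: a running
# SPDK nvmf_tgt is assumed.
nvmf_provision() {
    nqn=$1; addr=$2; port=$3
    echo "rpc.py nvmf_create_transport -t tcp -o -u 8192"
    echo "rpc.py bdev_malloc_create 64 512"
    echo "rpc.py nvmf_create_subsystem $nqn -a -s SPDK00000000000001"
    echo "rpc.py nvmf_subsystem_add_ns $nqn Malloc0"
    echo "rpc.py nvmf_subsystem_add_listener $nqn -t tcp -a $addr -s $port"
}
nvmf_provision nqn.2016-06.io.spdk:cnode1 10.0.0.2 4420
```

The perf command then connects to exactly the listener created in the last step, which is why its `-r` transport string repeats the same traddr, trsvcid, and subnqn.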
00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 161676 ']' 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 161676 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 161676 ']' 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 161676 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161676 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161676' 00:11:23.783 killing process with pid 161676 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 161676 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 161676 00:11:23.783 nvmf threads initialize successfully 00:11:23.783 bdev subsystem init successfully 00:11:23.783 created a nvmf target service 00:11:23.783 create targets's poll groups done 00:11:23.783 all subsystems of target started 00:11:23.783 nvmf target is running 00:11:23.783 all subsystems of target stopped 00:11:23.783 destroy targets's poll groups done 00:11:23.783 destroyed the nvmf target service 00:11:23.783 bdev subsystem finish 
successfully 00:11:23.783 nvmf threads destroy successfully 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.783 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.350 00:11:24.350 real 0m15.360s 00:11:24.350 user 0m41.306s 00:11:24.350 sys 0m3.763s 00:11:24.350 13:22:15 
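The teardown above shows the firewall-cleanup pattern used by the `ipts`/`iptr` helpers: every rule the test installs carries an `SPDK_NVMF` comment (via `-m comment --comment`), so cleanup can simply round-trip the ruleset through `iptables-save | grep -v SPDK_NVMF | iptables-restore`, sweeping all tagged rules at once. The filtering step can be demonstrated unprivileged by standing in a literal for the `iptables-save` output:

```shell
#!/bin/sh
# Sketch of the tag-and-sweep cleanup: rules carrying the SPDK_NVMF
# comment are dropped from a simulated iptables-save dump. The literal
# below stands in for real iptables-save output so no root is needed.
sample_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'
printf '%s\n' "$sample_rules" | grep -v SPDK_NVMF
```

Only the two untagged rules survive the filter; in the real teardown the filtered dump is fed straight back into `iptables-restore`.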
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.350 ************************************ 00:11:24.350 END TEST nvmf_example 00:11:24.350 ************************************ 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.350 13:22:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.350 ************************************ 00:11:24.350 START TEST nvmf_filesystem 00:11:24.350 ************************************ 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:24.350 * Looking for test storage... 
00:11:24.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:24.350 
13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:24.350 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:24.350 --rc genhtml_branch_coverage=1 00:11:24.350 --rc genhtml_function_coverage=1 00:11:24.350 --rc genhtml_legend=1 00:11:24.350 --rc geninfo_all_blocks=1 00:11:24.350 --rc geninfo_unexecuted_blocks=1 00:11:24.350 00:11:24.350 ' 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:24.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.350 --rc genhtml_branch_coverage=1 00:11:24.350 --rc genhtml_function_coverage=1 00:11:24.350 --rc genhtml_legend=1 00:11:24.350 --rc geninfo_all_blocks=1 00:11:24.350 --rc geninfo_unexecuted_blocks=1 00:11:24.350 00:11:24.350 ' 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:24.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.350 --rc genhtml_branch_coverage=1 00:11:24.350 --rc genhtml_function_coverage=1 00:11:24.350 --rc genhtml_legend=1 00:11:24.350 --rc geninfo_all_blocks=1 00:11:24.350 --rc geninfo_unexecuted_blocks=1 00:11:24.350 00:11:24.350 ' 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:24.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.350 --rc genhtml_branch_coverage=1 00:11:24.350 --rc genhtml_function_coverage=1 00:11:24.350 --rc genhtml_legend=1 00:11:24.350 --rc geninfo_all_blocks=1 00:11:24.350 --rc geninfo_unexecuted_blocks=1 00:11:24.350 00:11:24.350 ' 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:24.350 13:22:16 
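The scripts/common.sh trace above (`lt 1.15 2` calling `cmp_versions`) splits both version strings on `.`/`-`/`:` and compares them component by component, so that `1.15 < 2` holds numerically even though `"1.15" > "2"` lexically. A condensed sketch of the same check follows; it uses GNU `sort -V` for brevity rather than reproducing the script's field-by-field loop:

```shell
#!/bin/sh
# Sketch of the version-less-than test traced in scripts/common.sh.
# Simplified: GNU sort -V does the component-wise ordering that
# cmp_versions implements with an explicit loop over split fields.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "1.15 < 2"
```

The numeric ordering matters for cases like `1.9` vs `1.15`, where a plain string compare would get the answer backwards.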
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:24.350 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:24.351 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:24.351 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- 
# CONFIG_DAOS=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 
00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:24.351 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:24.351 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:24.352 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:24.352 #define SPDK_CONFIG_H 00:11:24.352 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:24.352 #define SPDK_CONFIG_APPS 1 00:11:24.352 #define SPDK_CONFIG_ARCH native 00:11:24.352 #undef SPDK_CONFIG_ASAN 00:11:24.352 #undef SPDK_CONFIG_AVAHI 00:11:24.352 #undef SPDK_CONFIG_CET 00:11:24.352 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:24.352 #define SPDK_CONFIG_COVERAGE 1 00:11:24.352 #define SPDK_CONFIG_CROSS_PREFIX 00:11:24.352 #undef SPDK_CONFIG_CRYPTO 00:11:24.352 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:24.352 #undef SPDK_CONFIG_CUSTOMOCF 00:11:24.352 #undef SPDK_CONFIG_DAOS 00:11:24.352 #define SPDK_CONFIG_DAOS_DIR 00:11:24.352 #define SPDK_CONFIG_DEBUG 1 00:11:24.352 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:24.352 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:24.352 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:24.352 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.352 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:24.352 #undef SPDK_CONFIG_DPDK_UADK 00:11:24.352 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:24.352 #define SPDK_CONFIG_EXAMPLES 1 00:11:24.352 #undef SPDK_CONFIG_FC 00:11:24.352 #define SPDK_CONFIG_FC_PATH 00:11:24.352 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:24.352 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:24.352 #define SPDK_CONFIG_FSDEV 1 00:11:24.352 #undef SPDK_CONFIG_FUSE 00:11:24.352 #undef SPDK_CONFIG_FUZZER 00:11:24.352 #define SPDK_CONFIG_FUZZER_LIB 00:11:24.352 #undef SPDK_CONFIG_GOLANG 00:11:24.352 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:24.352 #define 
SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:24.352 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:24.352 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:24.352 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:24.352 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:24.352 #undef SPDK_CONFIG_HAVE_LZ4 00:11:24.352 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:24.352 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:24.352 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:24.352 #define SPDK_CONFIG_IDXD 1 00:11:24.352 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:24.352 #undef SPDK_CONFIG_IPSEC_MB 00:11:24.352 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:24.352 #define SPDK_CONFIG_ISAL 1 00:11:24.352 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:24.352 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:24.352 #define SPDK_CONFIG_LIBDIR 00:11:24.352 #undef SPDK_CONFIG_LTO 00:11:24.352 #define SPDK_CONFIG_MAX_LCORES 128 00:11:24.352 #define SPDK_CONFIG_NVME_CUSE 1 00:11:24.352 #undef SPDK_CONFIG_OCF 00:11:24.352 #define SPDK_CONFIG_OCF_PATH 00:11:24.352 #define SPDK_CONFIG_OPENSSL_PATH 00:11:24.352 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:24.352 #define SPDK_CONFIG_PGO_DIR 00:11:24.352 #undef SPDK_CONFIG_PGO_USE 00:11:24.352 #define SPDK_CONFIG_PREFIX /usr/local 00:11:24.352 #undef SPDK_CONFIG_RAID5F 00:11:24.352 #undef SPDK_CONFIG_RBD 00:11:24.352 #define SPDK_CONFIG_RDMA 1 00:11:24.352 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:24.352 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:24.352 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:24.352 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:24.352 #define SPDK_CONFIG_SHARED 1 00:11:24.352 #undef SPDK_CONFIG_SMA 00:11:24.352 #define SPDK_CONFIG_TESTS 1 00:11:24.352 #undef SPDK_CONFIG_TSAN 00:11:24.352 #define SPDK_CONFIG_UBLK 1 00:11:24.352 #define SPDK_CONFIG_UBSAN 1 00:11:24.352 #undef SPDK_CONFIG_UNIT_TESTS 00:11:24.352 #undef SPDK_CONFIG_URING 00:11:24.352 #define SPDK_CONFIG_URING_PATH 00:11:24.352 #undef SPDK_CONFIG_URING_ZNS 00:11:24.352 #undef SPDK_CONFIG_USDT 00:11:24.352 
#undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:24.352 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:24.352 #define SPDK_CONFIG_VFIO_USER 1 00:11:24.352 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:24.352 #define SPDK_CONFIG_VHOST 1 00:11:24.352 #define SPDK_CONFIG_VIRTIO 1 00:11:24.352 #undef SPDK_CONFIG_VTUNE 00:11:24.352 #define SPDK_CONFIG_VTUNE_DIR 00:11:24.352 #define SPDK_CONFIG_WERROR 1 00:11:24.352 #define SPDK_CONFIG_WPDK_DIR 00:11:24.352 #undef SPDK_CONFIG_XNVME 00:11:24.352 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.352 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:24.352 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:24.613 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:24.613 
13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:24.613 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:24.613 
13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:24.613 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:24.614 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:24.614 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 163260 ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 163260 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.gNLdcD 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.gNLdcD/tests/target /tmp/spdk.gNLdcD 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=54061846528 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988524032 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7926677504 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:24.615 
13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30984228864 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375277568 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22429696 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993952768 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:11:24.615 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=311296 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:24.615 * Looking for test storage... 
00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=54061846528 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10141270016 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.615 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.615 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:24.616 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:24.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.616 --rc genhtml_branch_coverage=1 00:11:24.616 --rc genhtml_function_coverage=1 00:11:24.616 --rc genhtml_legend=1 00:11:24.616 --rc geninfo_all_blocks=1 00:11:24.616 --rc geninfo_unexecuted_blocks=1 00:11:24.616 00:11:24.616 ' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:24.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.616 --rc genhtml_branch_coverage=1 00:11:24.616 --rc genhtml_function_coverage=1 00:11:24.616 --rc genhtml_legend=1 00:11:24.616 --rc geninfo_all_blocks=1 00:11:24.616 --rc geninfo_unexecuted_blocks=1 00:11:24.616 00:11:24.616 ' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:24.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.616 --rc genhtml_branch_coverage=1 00:11:24.616 --rc genhtml_function_coverage=1 00:11:24.616 --rc genhtml_legend=1 00:11:24.616 --rc geninfo_all_blocks=1 00:11:24.616 --rc geninfo_unexecuted_blocks=1 00:11:24.616 00:11:24.616 ' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:24.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.616 --rc genhtml_branch_coverage=1 00:11:24.616 --rc genhtml_function_coverage=1 00:11:24.616 --rc genhtml_legend=1 00:11:24.616 --rc geninfo_all_blocks=1 00:11:24.616 --rc geninfo_unexecuted_blocks=1 00:11:24.616 00:11:24.616 ' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.616 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:24.616 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.617 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.147 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.148 13:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:27.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:27.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.148 13:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:27.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:27.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:27.148 13:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:11:27.148 00:11:27.148 --- 10.0.0.2 ping statistics --- 00:11:27.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.148 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:11:27.148 00:11:27.148 --- 10.0.0.1 ping statistics --- 00:11:27.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.148 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:27.148 13:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.148 ************************************ 00:11:27.148 START TEST nvmf_filesystem_no_in_capsule 00:11:27.148 ************************************ 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:27.148 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=164958 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 164958 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 164958 ']' 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 [2024-10-14 13:22:18.746829] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:11:27.149 [2024-10-14 13:22:18.746910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.149 [2024-10-14 13:22:18.818816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.149 [2024-10-14 13:22:18.868741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.149 [2024-10-14 13:22:18.868799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:27.149 [2024-10-14 13:22:18.868827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.149 [2024-10-14 13:22:18.868839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.149 [2024-10-14 13:22:18.868855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.149 [2024-10-14 13:22:18.870420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.149 [2024-10-14 13:22:18.870479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.149 [2024-10-14 13:22:18.870545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.149 [2024-10-14 13:22:18.870548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.149 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.407 [2024-10-14 13:22:19.019185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:27.407 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 Malloc1 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 [2024-10-14 13:22:19.214148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:27.408 13:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:27.408 { 00:11:27.408 "name": "Malloc1", 00:11:27.408 "aliases": [ 00:11:27.408 "bdedcb1e-ae35-43cd-a3ff-f16e2bbee407" 00:11:27.408 ], 00:11:27.408 "product_name": "Malloc disk", 00:11:27.408 "block_size": 512, 00:11:27.408 "num_blocks": 1048576, 00:11:27.408 "uuid": "bdedcb1e-ae35-43cd-a3ff-f16e2bbee407", 00:11:27.408 "assigned_rate_limits": { 00:11:27.408 "rw_ios_per_sec": 0, 00:11:27.408 "rw_mbytes_per_sec": 0, 00:11:27.408 "r_mbytes_per_sec": 0, 00:11:27.408 "w_mbytes_per_sec": 0 00:11:27.408 }, 00:11:27.408 "claimed": true, 00:11:27.408 "claim_type": "exclusive_write", 00:11:27.408 "zoned": false, 00:11:27.408 "supported_io_types": { 00:11:27.408 "read": true, 00:11:27.408 "write": true, 00:11:27.408 "unmap": true, 00:11:27.408 "flush": true, 00:11:27.408 "reset": true, 00:11:27.408 "nvme_admin": false, 00:11:27.408 "nvme_io": false, 00:11:27.408 "nvme_io_md": false, 00:11:27.408 "write_zeroes": true, 00:11:27.408 "zcopy": true, 00:11:27.408 "get_zone_info": false, 00:11:27.408 "zone_management": false, 00:11:27.408 "zone_append": false, 00:11:27.408 "compare": false, 00:11:27.408 "compare_and_write": 
false, 00:11:27.408 "abort": true, 00:11:27.408 "seek_hole": false, 00:11:27.408 "seek_data": false, 00:11:27.408 "copy": true, 00:11:27.408 "nvme_iov_md": false 00:11:27.408 }, 00:11:27.408 "memory_domains": [ 00:11:27.408 { 00:11:27.408 "dma_device_id": "system", 00:11:27.408 "dma_device_type": 1 00:11:27.408 }, 00:11:27.408 { 00:11:27.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.408 "dma_device_type": 2 00:11:27.408 } 00:11:27.408 ], 00:11:27.408 "driver_specific": {} 00:11:27.408 } 00:11:27.408 ]' 00:11:27.408 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:27.666 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:27.666 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:27.666 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:27.666 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:27.666 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:27.666 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:27.666 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.231 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:28.231 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.231 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.231 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:28.231 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:30.130 13:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:30.130 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:30.388 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:30.953 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:31.887 13:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.887 ************************************ 00:11:31.887 START TEST filesystem_ext4 00:11:31.887 ************************************ 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:31.887 13:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:31.887 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:31.887 mke2fs 1.47.0 (5-Feb-2023) 00:11:32.145 Discarding device blocks: 0/522240 done 00:11:32.145 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:32.145 Filesystem UUID: 0310c73a-6814-4f63-a5df-1c702e00e819 00:11:32.145 Superblock backups stored on blocks: 00:11:32.145 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:32.145 00:11:32.145 Allocating group tables: 0/64 done 00:11:32.145 Writing inode tables: 0/64 done 00:11:32.145 Creating journal (8192 blocks): done 00:11:32.145 Writing superblocks and filesystem accounting information: 0/64 done 00:11:32.145 00:11:32.145 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:32.145 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.402 13:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 164958 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.402 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.403 00:11:37.403 real 0m5.471s 00:11:37.403 user 0m0.015s 00:11:37.403 sys 0m0.094s 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:37.403 ************************************ 00:11:37.403 END TEST filesystem_ext4 00:11:37.403 ************************************ 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:37.403 
13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.403 ************************************ 00:11:37.403 START TEST filesystem_btrfs 00:11:37.403 ************************************ 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:37.403 13:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:37.403 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:37.659 btrfs-progs v6.8.1 00:11:37.659 See https://btrfs.readthedocs.io for more information. 00:11:37.659 00:11:37.659 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:37.659 NOTE: several default settings have changed in version 5.15, please make sure 00:11:37.659 this does not affect your deployments: 00:11:37.659 - DUP for metadata (-m dup) 00:11:37.659 - enabled no-holes (-O no-holes) 00:11:37.659 - enabled free-space-tree (-R free-space-tree) 00:11:37.659 00:11:37.659 Label: (null) 00:11:37.659 UUID: 24686a07-2228-40d1-a7ad-79e8b1f1f196 00:11:37.659 Node size: 16384 00:11:37.659 Sector size: 4096 (CPU page size: 4096) 00:11:37.659 Filesystem size: 510.00MiB 00:11:37.659 Block group profiles: 00:11:37.659 Data: single 8.00MiB 00:11:37.659 Metadata: DUP 32.00MiB 00:11:37.659 System: DUP 8.00MiB 00:11:37.659 SSD detected: yes 00:11:37.659 Zoned device: no 00:11:37.659 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:37.659 Checksum: crc32c 00:11:37.659 Number of devices: 1 00:11:37.659 Devices: 00:11:37.659 ID SIZE PATH 00:11:37.659 1 510.00MiB /dev/nvme0n1p1 00:11:37.659 00:11:37.659 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:37.659 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.591 13:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 164958 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.591 00:11:38.591 real 0m1.285s 00:11:38.591 user 0m0.017s 00:11:38.591 sys 0m0.132s 00:11:38.591 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.591 
13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.591 ************************************ 00:11:38.591 END TEST filesystem_btrfs 00:11:38.591 ************************************ 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.849 ************************************ 00:11:38.849 START TEST filesystem_xfs 00:11:38.849 ************************************ 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:38.849 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:38.849 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:38.849 = sectsz=512 attr=2, projid32bit=1 00:11:38.849 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:38.849 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:38.849 data = bsize=4096 blocks=130560, imaxpct=25 00:11:38.849 = sunit=0 swidth=0 blks 00:11:38.849 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:38.849 log =internal log bsize=4096 blocks=16384, version=2 00:11:38.849 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:38.849 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:40.228 Discarding blocks...Done. 
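The xtrace output above (and in the ext4 and btrfs runs before it) shows the `make_filesystem` helper from `common/autotest_common.sh` selecting a filesystem-specific force flag (`-F` when the fstype is ext4, lowercase `-f` otherwise) before invoking the matching `mkfs` tool. A minimal sketch of that flag-selection logic is below; the `DRY_RUN` guard is an addition for illustration only (not in the original helper, which also wraps the mkfs call in a retry loop using its local `i` counter):

```shell
# Sketch of the make_filesystem pattern visible in the log
# (common/autotest_common.sh@926-945). Assumptions: DRY_RUN is a
# hypothetical guard added here so the flag selection can be shown
# without touching a real block device.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F          # mke2fs forces with capital -F
    else
        force=-f          # mkfs.btrfs / mkfs.xfs force with lowercase -f
    fi
    if [ -n "${DRY_RUN:-}" ]; then
        # Print the command instead of running it
        echo "mkfs.$fstype $force $dev_name"
        return 0
    fi
    "mkfs.$fstype" "$force" "$dev_name"
}
```

In the log this is what produces the `mkfs.ext4 -F /dev/nvme0n1p1`, `mkfs.btrfs -f /dev/nvme0n1p1`, and `mkfs.xfs -f /dev/nvme0n1p1` invocations whose tool banners appear above.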
00:11:40.228 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:40.228 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.753 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.753 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:42.753 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 164958 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:42.754 13:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:42.754 00:11:42.754 real 0m3.989s 00:11:42.754 user 0m0.014s 00:11:42.754 sys 0m0.105s 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:42.754 ************************************ 00:11:42.754 END TEST filesystem_xfs 00:11:42.754 ************************************ 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:42.754 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 164958 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 164958 ']' 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 164958 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 164958 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 164958' 00:11:43.012 killing process with pid 164958 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 164958 00:11:43.012 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 164958 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:43.578 00:11:43.578 real 0m16.511s 00:11:43.578 user 1m3.884s 00:11:43.578 sys 0m2.317s 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.578 ************************************ 00:11:43.578 END TEST nvmf_filesystem_no_in_capsule 00:11:43.578 ************************************ 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.578 13:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.578 ************************************ 00:11:43.578 START TEST nvmf_filesystem_in_capsule 00:11:43.578 ************************************ 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=167120 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 167120 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 167120 ']' 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.578 13:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.578 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.579 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.579 [2024-10-14 13:22:35.299215] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:11:43.579 [2024-10-14 13:22:35.299289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.579 [2024-10-14 13:22:35.368354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.579 [2024-10-14 13:22:35.415388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.579 [2024-10-14 13:22:35.415451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.579 [2024-10-14 13:22:35.415480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.579 [2024-10-14 13:22:35.415492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.579 [2024-10-14 13:22:35.415502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
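Earlier in this run (the `common/autotest_common.sh@1198-1208` trace at the top of the section), `waitforserial` polls `lsblk -l -o NAME,SERIAL` until the expected number of NVMe namespaces with the target's serial shows up after `nvme connect`. A sketch of that polling loop follows; the `lister` parameter is a hypothetical injection point, added here so the loop can be exercised without real hardware, and is not part of the original helper:

```shell
# Sketch of the waitforserial polling pattern from the log
# (common/autotest_common.sh@1198-1208): retry up to 16 times,
# sleeping between attempts, counting devices whose SERIAL column
# matches. `lister` (assumption) stands in for `lsblk -l -o NAME,SERIAL`.
waitforserial() {
    local serial=$1
    local nvme_device_counter=${2:-1}
    local lister=${3:-"lsblk -l -o NAME,SERIAL"}
    local i=0 nvme_devices=0
    while (( i++ <= 15 )); do
        # grep -c exits non-zero on zero matches but still prints "0"
        nvme_devices=$($lister | grep -c "$serial" || true)
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
```

In the log the loop succeeds on its first check after the initial two-second sleep (`nvme_devices=1`), which is why `return 0` appears immediately at `@1208`.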
00:11:43.579 [2024-10-14 13:22:35.420151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.579 [2024-10-14 13:22:35.420216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.579 [2024-10-14 13:22:35.420288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.579 [2024-10-14 13:22:35.420285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.837 [2024-10-14 13:22:35.618856] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.837 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 Malloc1 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 13:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 [2024-10-14 13:22:35.812840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.095 13:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:44.095 { 00:11:44.095 "name": "Malloc1", 00:11:44.095 "aliases": [ 00:11:44.095 "e0954e30-365a-4f60-8362-c281f0a84e22" 00:11:44.095 ], 00:11:44.095 "product_name": "Malloc disk", 00:11:44.095 "block_size": 512, 00:11:44.095 "num_blocks": 1048576, 00:11:44.095 "uuid": "e0954e30-365a-4f60-8362-c281f0a84e22", 00:11:44.095 "assigned_rate_limits": { 00:11:44.095 "rw_ios_per_sec": 0, 00:11:44.095 "rw_mbytes_per_sec": 0, 00:11:44.095 "r_mbytes_per_sec": 0, 00:11:44.095 "w_mbytes_per_sec": 0 00:11:44.095 }, 00:11:44.095 "claimed": true, 00:11:44.095 "claim_type": "exclusive_write", 00:11:44.095 "zoned": false, 00:11:44.095 "supported_io_types": { 00:11:44.095 "read": true, 00:11:44.095 "write": true, 00:11:44.095 "unmap": true, 00:11:44.095 "flush": true, 00:11:44.095 "reset": true, 00:11:44.095 "nvme_admin": false, 00:11:44.095 "nvme_io": false, 00:11:44.095 "nvme_io_md": false, 00:11:44.095 "write_zeroes": true, 00:11:44.095 "zcopy": true, 00:11:44.095 "get_zone_info": false, 00:11:44.095 "zone_management": false, 00:11:44.095 "zone_append": false, 00:11:44.095 "compare": false, 00:11:44.095 "compare_and_write": false, 00:11:44.095 "abort": true, 00:11:44.095 "seek_hole": false, 00:11:44.095 "seek_data": false, 00:11:44.095 "copy": true, 00:11:44.095 "nvme_iov_md": false 00:11:44.095 }, 00:11:44.095 "memory_domains": [ 00:11:44.095 { 00:11:44.095 "dma_device_id": "system", 00:11:44.095 "dma_device_type": 1 00:11:44.095 }, 00:11:44.095 { 00:11:44.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.095 "dma_device_type": 2 00:11:44.095 } 00:11:44.095 ], 00:11:44.095 
"driver_specific": {} 00:11:44.095 } 00:11:44.095 ]' 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:44.095 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.027 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.027 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.027 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.027 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:45.027 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:46.924 13:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:46.924 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:47.182 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:47.441 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.374 ************************************ 00:11:48.374 START TEST filesystem_in_capsule_ext4 00:11:48.374 ************************************ 00:11:48.374 13:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:48.374 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:48.374 mke2fs 1.47.0 (5-Feb-2023) 00:11:48.632 Discarding device blocks: 
0/522240 done 00:11:48.632 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:48.632 Filesystem UUID: 9b3d059c-1227-43f5-92fa-8da0231b8e2a 00:11:48.632 Superblock backups stored on blocks: 00:11:48.632 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:48.632 00:11:48.632 Allocating group tables: 0/64 done 00:11:48.632 Writing inode tables: 0/64 done 00:11:49.566 Creating journal (8192 blocks): done 00:11:49.566 Writing superblocks and filesystem accounting information: 0/64 done 00:11:49.566 00:11:49.566 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:49.566 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.123 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.123 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:56.123 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 167120 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.124 00:11:56.124 real 0m6.732s 00:11:56.124 user 0m0.012s 00:11:56.124 sys 0m0.062s 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:56.124 ************************************ 00:11:56.124 END TEST filesystem_in_capsule_ext4 00:11:56.124 ************************************ 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.124 ************************************ 00:11:56.124 START 
TEST filesystem_in_capsule_btrfs 00:11:56.124 ************************************ 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:56.124 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:56.124 btrfs-progs v6.8.1 00:11:56.124 See https://btrfs.readthedocs.io for more information. 00:11:56.124 00:11:56.124 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:56.124 NOTE: several default settings have changed in version 5.15, please make sure 00:11:56.124 this does not affect your deployments: 00:11:56.124 - DUP for metadata (-m dup) 00:11:56.124 - enabled no-holes (-O no-holes) 00:11:56.124 - enabled free-space-tree (-R free-space-tree) 00:11:56.124 00:11:56.124 Label: (null) 00:11:56.124 UUID: eb2af937-0960-474c-8eec-da5afb05acf9 00:11:56.124 Node size: 16384 00:11:56.124 Sector size: 4096 (CPU page size: 4096) 00:11:56.124 Filesystem size: 510.00MiB 00:11:56.124 Block group profiles: 00:11:56.124 Data: single 8.00MiB 00:11:56.124 Metadata: DUP 32.00MiB 00:11:56.124 System: DUP 8.00MiB 00:11:56.124 SSD detected: yes 00:11:56.124 Zoned device: no 00:11:56.124 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:56.124 Checksum: crc32c 00:11:56.124 Number of devices: 1 00:11:56.124 Devices: 00:11:56.124 ID SIZE PATH 00:11:56.124 1 510.00MiB /dev/nvme0n1p1 00:11:56.124 00:11:56.124 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:56.124 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.124 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.124 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:56.382 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.382 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:56.382 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:56.382 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 167120 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.382 00:11:56.382 real 0m1.091s 00:11:56.382 user 0m0.019s 00:11:56.382 sys 0m0.105s 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.382 ************************************ 00:11:56.382 END TEST filesystem_in_capsule_btrfs 00:11:56.382 ************************************ 00:11:56.382 13:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.382 ************************************ 00:11:56.382 START TEST filesystem_in_capsule_xfs 00:11:56.382 ************************************ 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:56.382 
13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:56.382 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:56.382 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:56.382 = sectsz=512 attr=2, projid32bit=1 00:11:56.382 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:56.382 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:56.382 data = bsize=4096 blocks=130560, imaxpct=25 00:11:56.382 = sunit=0 swidth=0 blks 00:11:56.382 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:56.382 log =internal log bsize=4096 blocks=16384, version=2 00:11:56.382 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:56.382 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:57.317 Discarding blocks...Done. 
00:11:57.317 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:57.317 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 167120 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.856 00:11:59.856 real 0m3.567s 00:11:59.856 user 0m0.014s 00:11:59.856 sys 0m0.064s 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:59.856 ************************************ 00:11:59.856 END TEST filesystem_in_capsule_xfs 00:11:59.856 ************************************ 00:11:59.856 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:00.115 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.116 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 167120 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 167120 ']' 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 167120 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.116 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 167120 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 167120' 00:12:00.116 killing process with pid 167120 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 167120 00:12:00.116 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 167120 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:00.683 00:12:00.683 real 0m17.034s 00:12:00.683 user 1m6.002s 00:12:00.683 sys 0m2.228s 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.683 ************************************ 00:12:00.683 END TEST nvmf_filesystem_in_capsule 00:12:00.683 ************************************ 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.683 rmmod nvme_tcp 00:12:00.683 rmmod nvme_fabrics 00:12:00.683 rmmod nvme_keyring 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.683 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.590 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.590 00:12:02.590 real 0m38.396s 00:12:02.590 user 2m10.978s 00:12:02.590 sys 0m6.268s 00:12:02.590 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.590 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.590 ************************************ 00:12:02.590 END TEST nvmf_filesystem 00:12:02.590 ************************************ 00:12:02.590 13:22:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.590 13:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:02.590 13:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.590 13:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.849 ************************************ 00:12:02.849 START TEST nvmf_target_discovery 00:12:02.849 ************************************ 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.849 * Looking for test storage... 
00:12:02.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:02.849 
13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.849 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.850 --rc genhtml_branch_coverage=1 00:12:02.850 --rc genhtml_function_coverage=1 00:12:02.850 --rc genhtml_legend=1 00:12:02.850 --rc geninfo_all_blocks=1 00:12:02.850 --rc geninfo_unexecuted_blocks=1 00:12:02.850 00:12:02.850 ' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.850 --rc genhtml_branch_coverage=1 00:12:02.850 --rc genhtml_function_coverage=1 00:12:02.850 --rc genhtml_legend=1 00:12:02.850 --rc geninfo_all_blocks=1 00:12:02.850 --rc geninfo_unexecuted_blocks=1 00:12:02.850 00:12:02.850 ' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.850 --rc genhtml_branch_coverage=1 00:12:02.850 --rc genhtml_function_coverage=1 00:12:02.850 --rc genhtml_legend=1 00:12:02.850 --rc geninfo_all_blocks=1 00:12:02.850 --rc geninfo_unexecuted_blocks=1 00:12:02.850 00:12:02.850 ' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:02.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.850 --rc genhtml_branch_coverage=1 00:12:02.850 --rc genhtml_function_coverage=1 00:12:02.850 --rc genhtml_legend=1 00:12:02.850 --rc geninfo_all_blocks=1 00:12:02.850 --rc geninfo_unexecuted_blocks=1 00:12:02.850 00:12:02.850 ' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.850 13:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.850 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.385 13:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.385 13:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:05.385 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:05.385 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.385 13:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:05.385 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:05.385 13:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:05.385 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:05.385 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:12:05.386 00:12:05.386 --- 10.0.0.2 ping statistics --- 00:12:05.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.386 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:12:05.386 00:12:05.386 --- 10.0.0.1 ping statistics --- 00:12:05.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.386 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=171277 00:12:05.386 13:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 171277 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 171277 ']' 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.386 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.386 [2024-10-14 13:22:56.950916] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:12:05.386 [2024-10-14 13:22:56.951016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.386 [2024-10-14 13:22:57.015558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.386 [2024-10-14 13:22:57.059136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:05.386 [2024-10-14 13:22:57.059197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.386 [2024-10-14 13:22:57.059224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.386 [2024-10-14 13:22:57.059235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.386 [2024-10-14 13:22:57.059251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.386 [2024-10-14 13:22:57.060812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.386 [2024-10-14 13:22:57.060920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.386 [2024-10-14 13:22:57.061019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.386 [2024-10-14 13:22:57.061026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.386 [2024-10-14 13:22:57.198891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.386 Null1 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:05.386 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.386 
13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.645 [2024-10-14 13:22:57.243244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.645 Null2 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.645 
13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.645 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 Null3 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 Null4 00:12:05.646 
13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.646 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:05.905 00:12:05.905 Discovery Log Number of Records 6, Generation counter 6 00:12:05.905 =====Discovery Log Entry 0====== 00:12:05.905 trtype: tcp 00:12:05.905 adrfam: ipv4 00:12:05.905 subtype: current discovery subsystem 00:12:05.905 treq: not required 00:12:05.905 portid: 0 00:12:05.905 trsvcid: 4420 00:12:05.905 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.905 traddr: 10.0.0.2 00:12:05.905 eflags: explicit discovery connections, duplicate discovery information 00:12:05.905 sectype: none 00:12:05.905 =====Discovery Log Entry 1====== 00:12:05.905 trtype: tcp 00:12:05.905 adrfam: ipv4 00:12:05.905 subtype: nvme subsystem 00:12:05.905 treq: not required 00:12:05.905 portid: 0 00:12:05.905 trsvcid: 4420 00:12:05.905 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:05.905 traddr: 10.0.0.2 00:12:05.905 eflags: none 00:12:05.905 sectype: none 00:12:05.905 =====Discovery Log Entry 2====== 00:12:05.905 
trtype: tcp 00:12:05.905 adrfam: ipv4 00:12:05.905 subtype: nvme subsystem 00:12:05.905 treq: not required 00:12:05.905 portid: 0 00:12:05.905 trsvcid: 4420 00:12:05.905 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:05.905 traddr: 10.0.0.2 00:12:05.905 eflags: none 00:12:05.905 sectype: none 00:12:05.905 =====Discovery Log Entry 3====== 00:12:05.905 trtype: tcp 00:12:05.905 adrfam: ipv4 00:12:05.905 subtype: nvme subsystem 00:12:05.905 treq: not required 00:12:05.905 portid: 0 00:12:05.905 trsvcid: 4420 00:12:05.905 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:05.905 traddr: 10.0.0.2 00:12:05.905 eflags: none 00:12:05.905 sectype: none 00:12:05.905 =====Discovery Log Entry 4====== 00:12:05.905 trtype: tcp 00:12:05.905 adrfam: ipv4 00:12:05.905 subtype: nvme subsystem 00:12:05.905 treq: not required 00:12:05.905 portid: 0 00:12:05.905 trsvcid: 4420 00:12:05.905 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:05.905 traddr: 10.0.0.2 00:12:05.905 eflags: none 00:12:05.905 sectype: none 00:12:05.905 =====Discovery Log Entry 5====== 00:12:05.905 trtype: tcp 00:12:05.905 adrfam: ipv4 00:12:05.905 subtype: discovery subsystem referral 00:12:05.905 treq: not required 00:12:05.905 portid: 0 00:12:05.905 trsvcid: 4430 00:12:05.905 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.905 traddr: 10.0.0.2 00:12:05.905 eflags: none 00:12:05.905 sectype: none 00:12:05.905 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:05.905 Perform nvmf subsystem discovery via RPC 00:12:05.905 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:05.905 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.905 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.905 [ 00:12:05.905 { 00:12:05.905 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:05.905 "subtype": "Discovery", 00:12:05.905 "listen_addresses": [ 00:12:05.905 { 00:12:05.905 "trtype": "TCP", 00:12:05.905 "adrfam": "IPv4", 00:12:05.905 "traddr": "10.0.0.2", 00:12:05.905 "trsvcid": "4420" 00:12:05.905 } 00:12:05.905 ], 00:12:05.905 "allow_any_host": true, 00:12:05.905 "hosts": [] 00:12:05.905 }, 00:12:05.905 { 00:12:05.905 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.905 "subtype": "NVMe", 00:12:05.905 "listen_addresses": [ 00:12:05.905 { 00:12:05.905 "trtype": "TCP", 00:12:05.905 "adrfam": "IPv4", 00:12:05.905 "traddr": "10.0.0.2", 00:12:05.905 "trsvcid": "4420" 00:12:05.905 } 00:12:05.905 ], 00:12:05.905 "allow_any_host": true, 00:12:05.905 "hosts": [], 00:12:05.905 "serial_number": "SPDK00000000000001", 00:12:05.905 "model_number": "SPDK bdev Controller", 00:12:05.905 "max_namespaces": 32, 00:12:05.905 "min_cntlid": 1, 00:12:05.905 "max_cntlid": 65519, 00:12:05.905 "namespaces": [ 00:12:05.905 { 00:12:05.905 "nsid": 1, 00:12:05.905 "bdev_name": "Null1", 00:12:05.905 "name": "Null1", 00:12:05.905 "nguid": "4519C80F4FFB4F0FA28BD3569A40A4BD", 00:12:05.905 "uuid": "4519c80f-4ffb-4f0f-a28b-d3569a40a4bd" 00:12:05.905 } 00:12:05.905 ] 00:12:05.905 }, 00:12:05.905 { 00:12:05.905 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:05.905 "subtype": "NVMe", 00:12:05.905 "listen_addresses": [ 00:12:05.905 { 00:12:05.905 "trtype": "TCP", 00:12:05.905 "adrfam": "IPv4", 00:12:05.905 "traddr": "10.0.0.2", 00:12:05.905 "trsvcid": "4420" 00:12:05.905 } 00:12:05.905 ], 00:12:05.905 "allow_any_host": true, 00:12:05.905 "hosts": [], 00:12:05.905 "serial_number": "SPDK00000000000002", 00:12:05.905 "model_number": "SPDK bdev Controller", 00:12:05.905 "max_namespaces": 32, 00:12:05.905 "min_cntlid": 1, 00:12:05.905 "max_cntlid": 65519, 00:12:05.905 "namespaces": [ 00:12:05.905 { 00:12:05.905 "nsid": 1, 00:12:05.905 "bdev_name": "Null2", 00:12:05.905 "name": "Null2", 00:12:05.905 "nguid": "71E174376B0D489BBE4F238B04B02F73", 
00:12:05.905 "uuid": "71e17437-6b0d-489b-be4f-238b04b02f73" 00:12:05.905 } 00:12:05.905 ] 00:12:05.905 }, 00:12:05.905 { 00:12:05.905 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:05.905 "subtype": "NVMe", 00:12:05.905 "listen_addresses": [ 00:12:05.905 { 00:12:05.905 "trtype": "TCP", 00:12:05.905 "adrfam": "IPv4", 00:12:05.905 "traddr": "10.0.0.2", 00:12:05.905 "trsvcid": "4420" 00:12:05.905 } 00:12:05.906 ], 00:12:05.906 "allow_any_host": true, 00:12:05.906 "hosts": [], 00:12:05.906 "serial_number": "SPDK00000000000003", 00:12:05.906 "model_number": "SPDK bdev Controller", 00:12:05.906 "max_namespaces": 32, 00:12:05.906 "min_cntlid": 1, 00:12:05.906 "max_cntlid": 65519, 00:12:05.906 "namespaces": [ 00:12:05.906 { 00:12:05.906 "nsid": 1, 00:12:05.906 "bdev_name": "Null3", 00:12:05.906 "name": "Null3", 00:12:05.906 "nguid": "68E29D958D274022BCC5ADD4844B22CE", 00:12:05.906 "uuid": "68e29d95-8d27-4022-bcc5-add4844b22ce" 00:12:05.906 } 00:12:05.906 ] 00:12:05.906 }, 00:12:05.906 { 00:12:05.906 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:05.906 "subtype": "NVMe", 00:12:05.906 "listen_addresses": [ 00:12:05.906 { 00:12:05.906 "trtype": "TCP", 00:12:05.906 "adrfam": "IPv4", 00:12:05.906 "traddr": "10.0.0.2", 00:12:05.906 "trsvcid": "4420" 00:12:05.906 } 00:12:05.906 ], 00:12:05.906 "allow_any_host": true, 00:12:05.906 "hosts": [], 00:12:05.906 "serial_number": "SPDK00000000000004", 00:12:05.906 "model_number": "SPDK bdev Controller", 00:12:05.906 "max_namespaces": 32, 00:12:05.906 "min_cntlid": 1, 00:12:05.906 "max_cntlid": 65519, 00:12:05.906 "namespaces": [ 00:12:05.906 { 00:12:05.906 "nsid": 1, 00:12:05.906 "bdev_name": "Null4", 00:12:05.906 "name": "Null4", 00:12:05.906 "nguid": "0B10D96B8E384730A7C874BF8FF763F5", 00:12:05.906 "uuid": "0b10d96b-8e38-4730-a7c8-74bf8ff763f5" 00:12:05.906 } 00:12:05.906 ] 00:12:05.906 } 00:12:05.906 ] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 
13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.906 rmmod nvme_tcp 00:12:05.906 rmmod nvme_fabrics 00:12:05.906 rmmod nvme_keyring 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 171277 ']' 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 171277 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 171277 ']' 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 171277 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:05.906 
13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.906 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 171277 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 171277' 00:12:06.164 killing process with pid 171277 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 171277 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 171277 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.164 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.715 00:12:08.715 real 0m5.573s 00:12:08.715 user 0m4.685s 00:12:08.715 sys 0m1.917s 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.715 ************************************ 00:12:08.715 END TEST nvmf_target_discovery 00:12:08.715 ************************************ 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.715 ************************************ 00:12:08.715 START TEST nvmf_referrals 00:12:08.715 ************************************ 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.715 * Looking for test storage... 
00:12:08.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:08.715 13:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.715 
--rc genhtml_branch_coverage=1 00:12:08.715 --rc genhtml_function_coverage=1 00:12:08.715 --rc genhtml_legend=1 00:12:08.715 --rc geninfo_all_blocks=1 00:12:08.715 --rc geninfo_unexecuted_blocks=1 00:12:08.715 00:12:08.715 ' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.715 --rc genhtml_branch_coverage=1 00:12:08.715 --rc genhtml_function_coverage=1 00:12:08.715 --rc genhtml_legend=1 00:12:08.715 --rc geninfo_all_blocks=1 00:12:08.715 --rc geninfo_unexecuted_blocks=1 00:12:08.715 00:12:08.715 ' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.715 --rc genhtml_branch_coverage=1 00:12:08.715 --rc genhtml_function_coverage=1 00:12:08.715 --rc genhtml_legend=1 00:12:08.715 --rc geninfo_all_blocks=1 00:12:08.715 --rc geninfo_unexecuted_blocks=1 00:12:08.715 00:12:08.715 ' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.715 --rc genhtml_branch_coverage=1 00:12:08.715 --rc genhtml_function_coverage=1 00:12:08.715 --rc genhtml_legend=1 00:12:08.715 --rc geninfo_all_blocks=1 00:12:08.715 --rc geninfo_unexecuted_blocks=1 00:12:08.715 00:12:08.715 ' 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.715 
13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.715 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.716 13:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:08.716 13:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.716 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:11.246 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:11.246 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:11.246 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:11.246 13:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.246 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:11.247 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:12:11.247 00:12:11.247 --- 10.0.0.2 ping statistics --- 00:12:11.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.247 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:12:11.247 00:12:11.247 --- 10.0.0.1 ping statistics --- 00:12:11.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.247 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=173376 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 173376 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 173376 ']' 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 [2024-10-14 13:23:02.725543] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:12:11.247 [2024-10-14 13:23:02.725627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.247 [2024-10-14 13:23:02.805001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.247 [2024-10-14 13:23:02.855698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.247 [2024-10-14 13:23:02.855749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:11.247 [2024-10-14 13:23:02.855763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.247 [2024-10-14 13:23:02.855774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.247 [2024-10-14 13:23:02.855784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.247 [2024-10-14 13:23:02.857378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.247 [2024-10-14 13:23:02.860152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.247 [2024-10-14 13:23:02.862186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.247 [2024-10-14 13:23:02.862188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.247 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 [2024-10-14 13:23:03.002743] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 [2024-10-14 13:23:03.015025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:11.247 13:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.247 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:11.248 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:11.505 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.506 13:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.506 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.763 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:12.021 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.022 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:12.279 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:12.279 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:12.279 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:12.279 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:12.279 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.279 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.537 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:12.795 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:12.795 13:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:12.795 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:12.795 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:12.795 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:12.795 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:13.053 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.311 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.311 rmmod nvme_tcp 00:12:13.311 rmmod nvme_fabrics 00:12:13.311 rmmod nvme_keyring 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 173376 ']' 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 173376 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 173376 ']' 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 173376 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173376 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173376' 00:12:13.311 killing process with pid 173376 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- 
# kill 173376 00:12:13.311 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 173376 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.569 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.474 00:12:15.474 real 0m7.222s 00:12:15.474 user 0m11.100s 00:12:15.474 sys 0m2.466s 00:12:15.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.474 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.474 ************************************ 
00:12:15.474 END TEST nvmf_referrals 00:12:15.474 ************************************ 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 ************************************ 00:12:15.734 START TEST nvmf_connect_disconnect 00:12:15.734 ************************************ 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:15.734 * Looking for test storage... 
00:12:15.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:15.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.734 --rc genhtml_branch_coverage=1 00:12:15.734 --rc genhtml_function_coverage=1 00:12:15.734 --rc genhtml_legend=1 00:12:15.734 --rc geninfo_all_blocks=1 00:12:15.734 --rc geninfo_unexecuted_blocks=1 00:12:15.734 00:12:15.734 ' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:15.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.734 --rc genhtml_branch_coverage=1 00:12:15.734 --rc genhtml_function_coverage=1 00:12:15.734 --rc genhtml_legend=1 00:12:15.734 --rc geninfo_all_blocks=1 00:12:15.734 --rc geninfo_unexecuted_blocks=1 00:12:15.734 00:12:15.734 ' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:15.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.734 --rc genhtml_branch_coverage=1 00:12:15.734 --rc genhtml_function_coverage=1 00:12:15.734 --rc genhtml_legend=1 00:12:15.734 --rc geninfo_all_blocks=1 00:12:15.734 --rc geninfo_unexecuted_blocks=1 00:12:15.734 00:12:15.734 ' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:15.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.734 --rc genhtml_branch_coverage=1 00:12:15.734 --rc genhtml_function_coverage=1 00:12:15.734 --rc genhtml_legend=1 00:12:15.734 --rc geninfo_all_blocks=1 00:12:15.734 --rc geninfo_unexecuted_blocks=1 00:12:15.734 00:12:15.734 ' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.734 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.735 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.279 13:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.279 13:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:18.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.279 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:18.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.280 13:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:18.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:18.280 13:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:18.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.280 13:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:12:18.280 00:12:18.280 --- 10.0.0.2 ping statistics --- 00:12:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.280 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:12:18.280 00:12:18.280 --- 10.0.0.1 ping statistics --- 00:12:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.280 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
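For readability, the network bring-up traced above (`nvmf_tcp_init` in `nvmf/common.sh`) can be condensed into a short sketch. The interface names (`cvl_0_0`/`cvl_0_1`), addresses, namespace name, and the `SPDK_NVMF` comment tag are taken directly from the trace; the print-only `run` stand-in is an assumption added here so the sketch executes without root — the real script runs these commands directly (the `ipts` helper wraps `iptables` to tag SPDK-created rules for later cleanup).

```shell
# Condensed sketch of the nvmf_tcp_init steps seen in the trace above.
# NOTE: the real commands need root and the cvl_0_* interfaces; here the
# privileged calls go through "run", which only prints them.
run() { echo "$@"; }

run ip netns add cvl_0_0_ns_spdk                 # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# The ipts helper tags the rule with a comment so cleanup can find it later:
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
```

The cross-namespace pings above (10.0.0.2 from the host, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify this topology before the target starts.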
nvmfpid=175686 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 175686 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 175686 ']' 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.280 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.280 [2024-10-14 13:23:09.989906] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:12:18.280 [2024-10-14 13:23:09.990009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.280 [2024-10-14 13:23:10.061278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.280 [2024-10-14 13:23:10.111878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:18.280 [2024-10-14 13:23:10.111945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.280 [2024-10-14 13:23:10.111969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.280 [2024-10-14 13:23:10.111979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.280 [2024-10-14 13:23:10.111988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.280 [2024-10-14 13:23:10.113616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.280 [2024-10-14 13:23:10.113692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.280 [2024-10-14 13:23:10.113756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.280 [2024-10-14 13:23:10.113753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:18.539 13:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.539 [2024-10-14 13:23:10.260974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.539 13:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:18.539 [2024-10-14 13:23:10.339304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:18.539 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:21.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.485 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.646 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.175 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.696 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:10.975 rmmod nvme_tcp 00:16:10.975 rmmod nvme_fabrics 00:16:10.975 rmmod nvme_keyring 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
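The run above is connect_disconnect.sh repeatedly attaching to and detaching from the subsystem (num_iterations=100, NVME_CONNECT='nvme connect -i 8' per the trace at 13:23:10). A minimal sketch of that loop, not the literal SPDK script: the NQN, address, port, and `-i 8` come from the log; the function name and the DRY_RUN guard (so it can run without a live NVMe/TCP target) are assumptions added here.

```shell
#!/bin/sh
# Sketch of the connect/disconnect loop the trace runs; values from the log,
# DRY_RUN guard added here so it is runnable without the target present.
NQN="nqn.2016-06.io.spdk:cnode1"
ADDR=10.0.0.2
PORT=4420
num_iterations=100
DRY_RUN=${DRY_RUN:-1}

connect_disconnect_loop() {
    i=0
    while [ "$i" -lt "$num_iterations" ]; do
        if [ "$DRY_RUN" -eq 0 ]; then
            # -i 8 requests 8 I/O queues, matching NVME_CONNECT='nvme connect -i 8'
            nvme connect -i 8 -t tcp -a "$ADDR" -s "$PORT" -n "$NQN"
            nvme disconnect -n "$NQN"
        fi
        i=$((i + 1))
    done
    echo "$i iterations"
}

connect_disconnect_loop
```

With DRY_RUN=0 on a host where the target is listening on 10.0.0.2:4420, each disconnect prints the "NQN:... disconnected 1 controller(s)" line seen throughout the log.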
-- # modprobe -v -r nvme-fabrics 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 175686 ']' 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 175686 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 175686 ']' 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 175686 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 175686 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 175686' 00:16:10.975 killing process with pid 175686 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 175686 00:16:10.975 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 175686 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:11.234 13:27:02 
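The cleanup above calls killprocess on the nvmf target app (pid 175686): probe the pid with `kill -0`, verify the process, kill it, then wait. A simplified sketch of that pattern — the real helper in autotest_common.sh also checks the process name via ps and handles sudo-owned processes, which is omitted here.

```shell
#!/bin/sh
# Simplified killprocess: succeed silently if the pid is already gone,
# otherwise terminate it and reap it. The real SPDK helper also verifies
# the process name and sudo ownership before killing.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; exit status of a killed
    return 0                                 # child is expected to be nonzero
}

# Demonstration on a throwaway child process:
sleep 30 &
demo_pid=$!
killprocess "$demo_pid"
```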
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.234 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.764 00:16:13.764 real 3m57.648s 00:16:13.764 user 15m4.128s 00:16:13.764 sys 0m36.032s 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:13.764 ************************************ 00:16:13.764 END TEST nvmf_connect_disconnect 00:16:13.764 ************************************ 00:16:13.764 13:27:05 
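The iptr step above removes SPDK-added firewall rules by round-tripping the ruleset: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A dry illustration of the filtering stage on an invented sample ruleset (no root required; the sample rule text is not from the log):

```shell
#!/bin/sh
# Illustrates the grep stage of: iptables-save | grep -v SPDK_NVMF | iptables-restore
# The two sample rules below are invented for the demonstration.
rules='-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -m comment --comment SPDK_NVMF -j DROP'

filtered=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$filtered"
```

Only rules tagged with the SPDK_NVMF comment are dropped; everything else is restored unchanged.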
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.764 ************************************ 00:16:13.764 START TEST nvmf_multitarget 00:16:13.764 ************************************ 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:13.764 * Looking for test storage... 00:16:13.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:13.764 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:13.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.765 --rc genhtml_branch_coverage=1 00:16:13.765 --rc genhtml_function_coverage=1 00:16:13.765 --rc genhtml_legend=1 00:16:13.765 --rc geninfo_all_blocks=1 00:16:13.765 --rc 
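The trace above steps through cmp_versions in scripts/common.sh deciding whether lcov 1.15 is older than 2: split both version strings into fields, then compare field by field numerically, treating missing fields as 0. An illustrative re-implementation of that comparison (not the SPDK original, which also handles the `>`/`=` operators and `.-:` separators):

```shell
#!/bin/bash
# Field-by-field numeric version comparison, as cmp_versions does above.
# Returns 0 (true) when $1 is strictly older than $2.
version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local v x y
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        x=${a[v]:-0}    # missing fields compare as 0
        y=${b[v]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # versions are equal, so not less-than
}
```

This is why the trace takes the `lt 1.15 2` branch: the first fields already decide it (1 < 2), so 1.15 counts as older than 2 despite 15 > 2 lexically.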
geninfo_unexecuted_blocks=1 00:16:13.765 00:16:13.765 ' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:13.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.765 --rc genhtml_branch_coverage=1 00:16:13.765 --rc genhtml_function_coverage=1 00:16:13.765 --rc genhtml_legend=1 00:16:13.765 --rc geninfo_all_blocks=1 00:16:13.765 --rc geninfo_unexecuted_blocks=1 00:16:13.765 00:16:13.765 ' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:13.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.765 --rc genhtml_branch_coverage=1 00:16:13.765 --rc genhtml_function_coverage=1 00:16:13.765 --rc genhtml_legend=1 00:16:13.765 --rc geninfo_all_blocks=1 00:16:13.765 --rc geninfo_unexecuted_blocks=1 00:16:13.765 00:16:13.765 ' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:13.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.765 --rc genhtml_branch_coverage=1 00:16:13.765 --rc genhtml_function_coverage=1 00:16:13.765 --rc genhtml_legend=1 00:16:13.765 --rc geninfo_all_blocks=1 00:16:13.765 --rc geninfo_unexecuted_blocks=1 00:16:13.765 00:16:13.765 ' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.765 13:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.765 13:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:13.765 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.766 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:15.666 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:15.667 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:15.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:15.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- 
# [[ tcp == tcp ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:15.667 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:15.667 13:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:15.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:16:15.667 00:16:15.667 --- 10.0.0.2 ping statistics --- 00:16:15.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.667 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:16:15.667 00:16:15.667 --- 10.0.0.1 ping statistics --- 00:16:15.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.667 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=207221 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 207221 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 207221 ']' 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
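The `waitforlisten 207221` step traced above blocks until the freshly launched `nvmf_tgt` exposes its RPC socket at `/var/tmp/spdk.sock` (the `rpc_addr` and `max_retries=100` locals are visible in the trace). A minimal sketch of that polling loop, under the simplifying assumption that a bare existence check is enough — the real helper in autotest_common.sh also verifies the socket is actually answering RPCs:

```shell
#!/usr/bin/env bash
# Simplified stand-in for autotest_common.sh's waitforlisten:
# poll until the RPC socket path appears, up to max_retries attempts.
# NOTE: checking existence with -e is a sketch-level simplification;
# the real helper confirms the target process is listening on it.
wait_for_rpc_sock() {
    local rpc_addr=${1:-/var/tmp/spdk.sock}
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$rpc_addr" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}
```

In the trace this runs right after `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF` is started in the background, so the subsequent RPC calls are guaranteed a live socket.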
00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.667 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:15.667 [2024-10-14 13:27:07.359217] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:16:15.667 [2024-10-14 13:27:07.359306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.667 [2024-10-14 13:27:07.431041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.667 [2024-10-14 13:27:07.478535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.667 [2024-10-14 13:27:07.478590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.667 [2024-10-14 13:27:07.478620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.667 [2024-10-14 13:27:07.478632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.667 [2024-10-14 13:27:07.478642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
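The `-m 0xF` mask passed to `nvmf_tgt` (surfacing as `-c 0xF` in the DPDK EAL parameter line above) selects cores 0 through 3, which is why EAL reports "Total cores available: 4" and four reactor start notices follow. A small illustrative sketch of how such a hex core mask decodes into core numbers — this is not SPDK code, just the bit arithmetic it implies:

```shell
# Decode an SPDK/DPDK-style hex core mask into the selected core numbers.
# 0xF -> "0 1 2 3", matching the four reactors started in the trace.
decode_core_mask() {
    local mask=$(($1)) core=0 cores=()
    while [ "$mask" -ne 0 ]; do
        # Low bit set means this core number is selected.
        if [ $((mask & 1)) -eq 1 ]; then
            cores+=("$core")
        fi
        mask=$((mask >> 1))
        core=$((core + 1))
    done
    echo "${cores[@]}"
}
```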
00:16:15.667 [2024-10-14 13:27:07.480228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.667 [2024-10-14 13:27:07.480283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.667 [2024-10-14 13:27:07.480333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.667 [2024-10-14 13:27:07.480337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:15.925 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:16.183 "nvmf_tgt_1" 00:16:16.183 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:16.183 "nvmf_tgt_2" 00:16:16.183 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:16.183 13:27:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:16.440 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:16.440 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:16.440 true 00:16:16.440 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:16.697 true 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:16.697 13:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:16.697 rmmod nvme_tcp 00:16:16.697 rmmod nvme_fabrics 00:16:16.697 rmmod nvme_keyring 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 207221 ']' 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 207221 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 207221 ']' 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 207221 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 207221 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 207221' 00:16:16.697 killing process with pid 207221 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 207221 00:16:16.697 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 207221 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.954 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:19.500 00:16:19.500 
real 0m5.708s 00:16:19.500 user 0m6.623s 00:16:19.500 sys 0m1.897s 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:19.500 ************************************ 00:16:19.500 END TEST nvmf_multitarget 00:16:19.500 ************************************ 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:19.500 ************************************ 00:16:19.500 START TEST nvmf_rpc 00:16:19.500 ************************************ 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:19.500 * Looking for test storage... 
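The `iptr` cleanup at the end of the multitarget test above removes only the firewall rules the test installed, by round-tripping `iptables-save | grep -v SPDK_NVMF | iptables-restore` — which is why the insert at nvmf/common.sh@788 tagged its rule with an `-m comment --comment 'SPDK_NVMF:...'` marker. The filtering step can be exercised without root against a canned ruleset; the first rule below is copied from the trace, while the `--dport 22` rule is a made-up bystander included for contrast:

```shell
# Filter a saved iptables ruleset the way nvmf/common.sh's iptr does:
# drop every rule carrying the SPDK_NVMF comment tag, keep the rest.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# In the real helper this pipeline is:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
filtered=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$filtered"
```

Tagging rules with a comment and filtering on it keeps the cleanup idempotent: rules that predate the test survive the restore untouched.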
00:16:19.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.500 13:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:19.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.500 --rc genhtml_branch_coverage=1 00:16:19.500 --rc genhtml_function_coverage=1 00:16:19.500 --rc genhtml_legend=1 00:16:19.500 --rc geninfo_all_blocks=1 00:16:19.500 --rc geninfo_unexecuted_blocks=1 
00:16:19.500 00:16:19.500 ' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:19.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.500 --rc genhtml_branch_coverage=1 00:16:19.500 --rc genhtml_function_coverage=1 00:16:19.500 --rc genhtml_legend=1 00:16:19.500 --rc geninfo_all_blocks=1 00:16:19.500 --rc geninfo_unexecuted_blocks=1 00:16:19.500 00:16:19.500 ' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:19.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.500 --rc genhtml_branch_coverage=1 00:16:19.500 --rc genhtml_function_coverage=1 00:16:19.500 --rc genhtml_legend=1 00:16:19.500 --rc geninfo_all_blocks=1 00:16:19.500 --rc geninfo_unexecuted_blocks=1 00:16:19.500 00:16:19.500 ' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:19.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.500 --rc genhtml_branch_coverage=1 00:16:19.500 --rc genhtml_function_coverage=1 00:16:19.500 --rc genhtml_legend=1 00:16:19.500 --rc geninfo_all_blocks=1 00:16:19.500 --rc geninfo_unexecuted_blocks=1 00:16:19.500 00:16:19.500 ' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.500 13:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.500 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:19.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:19.501 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:19.501 13:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.405 
13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:16:21.405 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:21.405 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:21.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:21.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:21.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.406 13:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:21.406 
13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:21.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:16:21.406 00:16:21.406 --- 10.0.0.2 ping statistics --- 00:16:21.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.406 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:16:21.406 00:16:21.406 --- 10.0.0.1 ping statistics --- 00:16:21.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.406 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=209717 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:21.406 
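The `nvmf_tcp_init` segment above builds the test bed: it moves the target NIC (`cvl_0_0`) into a fresh network namespace, assigns 10.0.0.1/24 to the initiator side and 10.0.0.2/24 inside the namespace, opens TCP port 4420 in iptables, and cross-pings both directions. A dry-run sketch of those steps (interface names and addresses are taken from the log; the `RUN` wrapper is an illustrative addition, since the real commands need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup the trace performs.
# RUN=echo (the default here) prints each command instead of executing it.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace
INI_IF=cvl_0_1   # initiator side, stays in the default namespace

$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Reachability check in both directions, as in the log:
$RUN ping -c 1 10.0.0.2
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

Running the target inside the namespace (the `NVMF_TARGET_NS_CMD` prefix seen later in the trace) is what lets a single host act as both NVMe-oF target and initiator over a real NIC pair.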
13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 209717 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 209717 ']' 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.406 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.406 [2024-10-14 13:27:12.956273] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:16:21.406 [2024-10-14 13:27:12.956350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.406 [2024-10-14 13:27:13.019661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.406 [2024-10-14 13:27:13.064371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.406 [2024-10-14 13:27:13.064429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.406 [2024-10-14 13:27:13.064452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.406 [2024-10-14 13:27:13.064462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:21.406 [2024-10-14 13:27:13.064472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.406 [2024-10-14 13:27:13.066096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.406 [2024-10-14 13:27:13.066162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.406 [2024-10-14 13:27:13.066229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.406 [2024-10-14 13:27:13.066231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.406 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:21.406 "tick_rate": 2700000000, 00:16:21.406 "poll_groups": [ 00:16:21.406 { 00:16:21.406 "name": "nvmf_tgt_poll_group_000", 00:16:21.406 "admin_qpairs": 0, 00:16:21.406 "io_qpairs": 0, 00:16:21.406 
"current_admin_qpairs": 0, 00:16:21.406 "current_io_qpairs": 0, 00:16:21.406 "pending_bdev_io": 0, 00:16:21.406 "completed_nvme_io": 0, 00:16:21.406 "transports": [] 00:16:21.406 }, 00:16:21.406 { 00:16:21.406 "name": "nvmf_tgt_poll_group_001", 00:16:21.406 "admin_qpairs": 0, 00:16:21.406 "io_qpairs": 0, 00:16:21.406 "current_admin_qpairs": 0, 00:16:21.406 "current_io_qpairs": 0, 00:16:21.407 "pending_bdev_io": 0, 00:16:21.407 "completed_nvme_io": 0, 00:16:21.407 "transports": [] 00:16:21.407 }, 00:16:21.407 { 00:16:21.407 "name": "nvmf_tgt_poll_group_002", 00:16:21.407 "admin_qpairs": 0, 00:16:21.407 "io_qpairs": 0, 00:16:21.407 "current_admin_qpairs": 0, 00:16:21.407 "current_io_qpairs": 0, 00:16:21.407 "pending_bdev_io": 0, 00:16:21.407 "completed_nvme_io": 0, 00:16:21.407 "transports": [] 00:16:21.407 }, 00:16:21.407 { 00:16:21.407 "name": "nvmf_tgt_poll_group_003", 00:16:21.407 "admin_qpairs": 0, 00:16:21.407 "io_qpairs": 0, 00:16:21.407 "current_admin_qpairs": 0, 00:16:21.407 "current_io_qpairs": 0, 00:16:21.407 "pending_bdev_io": 0, 00:16:21.407 "completed_nvme_io": 0, 00:16:21.407 "transports": [] 00:16:21.407 } 00:16:21.407 ] 00:16:21.407 }' 00:16:21.407 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:21.407 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:21.407 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:21.407 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:21.407 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:21.407 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.665 [2024-10-14 13:27:13.284492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.665 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:21.665 "tick_rate": 2700000000, 00:16:21.665 "poll_groups": [ 00:16:21.665 { 00:16:21.665 "name": "nvmf_tgt_poll_group_000", 00:16:21.665 "admin_qpairs": 0, 00:16:21.665 "io_qpairs": 0, 00:16:21.665 "current_admin_qpairs": 0, 00:16:21.665 "current_io_qpairs": 0, 00:16:21.665 "pending_bdev_io": 0, 00:16:21.665 "completed_nvme_io": 0, 00:16:21.665 "transports": [ 00:16:21.665 { 00:16:21.665 "trtype": "TCP" 00:16:21.665 } 00:16:21.665 ] 00:16:21.665 }, 00:16:21.665 { 00:16:21.665 "name": "nvmf_tgt_poll_group_001", 00:16:21.665 "admin_qpairs": 0, 00:16:21.665 "io_qpairs": 0, 00:16:21.665 "current_admin_qpairs": 0, 00:16:21.665 "current_io_qpairs": 0, 00:16:21.665 "pending_bdev_io": 0, 00:16:21.665 "completed_nvme_io": 0, 00:16:21.665 "transports": [ 00:16:21.665 { 00:16:21.665 "trtype": "TCP" 00:16:21.665 } 00:16:21.665 ] 00:16:21.665 }, 00:16:21.665 { 00:16:21.665 "name": "nvmf_tgt_poll_group_002", 00:16:21.665 "admin_qpairs": 0, 00:16:21.665 "io_qpairs": 0, 00:16:21.665 
"current_admin_qpairs": 0, 00:16:21.665 "current_io_qpairs": 0, 00:16:21.665 "pending_bdev_io": 0, 00:16:21.666 "completed_nvme_io": 0, 00:16:21.666 "transports": [ 00:16:21.666 { 00:16:21.666 "trtype": "TCP" 00:16:21.666 } 00:16:21.666 ] 00:16:21.666 }, 00:16:21.666 { 00:16:21.666 "name": "nvmf_tgt_poll_group_003", 00:16:21.666 "admin_qpairs": 0, 00:16:21.666 "io_qpairs": 0, 00:16:21.666 "current_admin_qpairs": 0, 00:16:21.666 "current_io_qpairs": 0, 00:16:21.666 "pending_bdev_io": 0, 00:16:21.666 "completed_nvme_io": 0, 00:16:21.666 "transports": [ 00:16:21.666 { 00:16:21.666 "trtype": "TCP" 00:16:21.666 } 00:16:21.666 ] 00:16:21.666 } 00:16:21.666 ] 00:16:21.666 }' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
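The `jsum` helper exercised above extracts one field from every poll group with `jq '.poll_groups[].io_qpairs'` and totals the resulting column with awk. The awk half can be shown standalone (the sample values are stand-ins for jq's per-poll-group output; only the all-zero case appears in the log):

```shell
#!/usr/bin/env bash
# The summing stage of rpc.sh's jsum: one number per poll group on stdin,
# total on stdout.
sum() { awk '{s+=$1} END{print s}'; }

printf '%s\n' 0 0 0 0 | sum   # idle target, as in the trace: prints 0
printf '%s\n' 2 1 0 3 | sum   # hypothetical busy target: prints 6
```

The test then asserts `(( total == expected ))`, which is why the trace shows `(( 0 == 0 ))` for both `admin_qpairs` and `io_qpairs` on a freshly started target.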
MALLOC_BDEV_SIZE=64 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.666 Malloc1 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.666 [2024-10-14 13:27:13.448759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.666 
13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:21.666 [2024-10-14 13:27:13.471313] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:21.666 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:21.666 could not add new controller: failed to write to nvme-fabrics device 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.666 13:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.666 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:22.599 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:22.599 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:22.599 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.599 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:22.599 13:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.557 13:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
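The `waitforserial` / `waitforserial_disconnect` helpers above follow a generic poll-with-timeout pattern: after `nvme connect`, re-run `lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME` up to 15 times until the device (dis)appears. A sketch of that loop with the polled command parameterised, so the retry logic itself can be exercised without NVMe hardware (the `retry` name and shape are illustrative, not SPDK's exact helper):

```shell
#!/usr/bin/env bash
# Run "$@" up to 16 times, one second apart; succeed as soon as it does.
retry() {
  local i=0
  until "$@"; do
    (( ++i > 15 )) && return 1
    sleep 1
  done
}

# In the log the polled command is effectively:
#   lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME
# e.g.:
#   retry sh -c 'lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME'
retry true && echo "device ready"
```

The bounded retry is what keeps a flaky connect from hanging the whole autotest run: after ~15 seconds the helper gives up and the `trap ... nvmftestfini` cleanup seen earlier in the trace still fires.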
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.557 [2024-10-14 13:27:16.238102] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:24.557 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:24.557 could not add new controller: failed to write to nvme-fabrics device 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.557 13:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.557 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.168 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.168 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:25.168 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.168 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:25.168 13:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.211 13:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.211 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.211 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:27.211 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:27.211 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.212 [2024-10-14 13:27:19.017494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.212 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.230 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.230 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.230 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.230 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:28.230 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.310 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.310 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:30.310 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 13:27:21 
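The `waitforserial` / `waitforserial_disconnect` calls repeated throughout this log poll `lsblk` for the subsystem serial until the namespace appears (or disappears). A minimal sketch of that polling loop, reconstructed from the `-- #` xtrace lines above — the retry bound of 15 and the 2-second sleep are taken from the `@1205`–`@1208` trace, but this is not the verbatim SPDK `common/autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial helper as reconstructed from the xtrace output;
# variable names follow the trace (nvme_device_counter, nvme_devices, i).
waitforserial() {
    local serial=$1
    local nvme_device_counter=${2:-1} nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        # Count block devices whose SERIAL column matches the expected value.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
```

In the trace, `grep -c SPDKISFASTANDAWESOME` returns 1 on the first pass after `nvme connect`, so the `nvme_devices == nvme_device_counter` check succeeds immediately and the helper returns 0.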
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 [2024-10-14 13:27:21.836875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.311 13:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:30.603 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:30.603 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:30.603 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.603 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:30.603 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:32.595 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:32.596 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:32.596 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.854 [2024-10-14 13:27:24.623989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.854 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.787 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:33.787 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:33.787 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:33.787 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:33.787 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.686 [2024-10-14 13:27:27.430058] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.686 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.256 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.256 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:36.256 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.256 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:36.256 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 [2024-10-14 13:27:30.195593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.787 13:27:30 
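Each pass of the `for i in $(seq 1 $loops)` block above runs the same target-side RPC sequence. A dry-run sketch of one iteration, with `rpc_cmd` stubbed to echo so the call order is visible without a running SPDK target (the real `rpc_cmd` dispatches to `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# rpc_cmd is stubbed: this only shows the sequence from target/rpc.sh@82-@94;
# it does not talk to a real NVMe-oF target.
rpc_cmd() { echo "rpc: $*"; }
nqn=nqn.2016-06.io.spdk:cnode1

rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5   # namespace ID 5
rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
# host side: nvme connect / waitforserial / nvme disconnect happen here
rpc_cmd nvmf_subsystem_remove_ns "$nqn" 5
rpc_cmd nvmf_delete_subsystem "$nqn"
```

Note the namespace is created and removed with explicit ID 5 (`-n 5` / `remove_ns … 5`) in this loop, whereas the later `rpc.sh@99`–`@107` loop adds `Malloc1` without `-n` and removes namespace ID 1, the default assigned to the first namespace.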
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.787 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.046 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:39.046 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.046 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.046 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:39.046 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.575 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 [2024-10-14 13:27:32.962519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 [2024-10-14 13:27:33.010600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.576 
13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 [2024-10-14 13:27:33.058751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:41.576 
13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 [2024-10-14 13:27:33.106917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.576 [2024-10-14 
13:27:33.155091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.576 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.577 
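The five trace iterations above all execute the same `target/rpc.sh` loop body (lines 99-107 in the trace markers). A minimal standalone sketch of that loop shape, with `rpc_cmd` stubbed by `echo` so it runs without a live nvmf target (in the real test it wraps `scripts/rpc.py` against the running SPDK app):

```shell
#!/usr/bin/env bash
# Sketch of the subsystem churn loop seen in the trace (rpc.sh@99-107).
# rpc_cmd is stubbed here; NQN/serial/address values are copied from the log.
rpc_cmd() { echo "rpc: $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
loops=5

churn() {
  local i
  for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
    rpc_cmd nvmf_subsystem_remove_ns "$NQN" 1
    rpc_cmd nvmf_delete_subsystem "$NQN"
  done
}

churn
```

Each iteration builds the subsystem up and tears it straight back down, so repeated listener registration (the `*** NVMe/TCP Target Listening ***` notices above) is exercised five times.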
13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:41.577 "tick_rate": 2700000000, 00:16:41.577 "poll_groups": [ 00:16:41.577 { 00:16:41.577 "name": "nvmf_tgt_poll_group_000", 00:16:41.577 "admin_qpairs": 2, 00:16:41.577 "io_qpairs": 84, 00:16:41.577 "current_admin_qpairs": 0, 00:16:41.577 "current_io_qpairs": 0, 00:16:41.577 "pending_bdev_io": 0, 00:16:41.577 "completed_nvme_io": 182, 00:16:41.577 "transports": [ 00:16:41.577 { 00:16:41.577 "trtype": "TCP" 00:16:41.577 } 00:16:41.577 ] 00:16:41.577 }, 00:16:41.577 { 00:16:41.577 "name": "nvmf_tgt_poll_group_001", 00:16:41.577 "admin_qpairs": 2, 00:16:41.577 "io_qpairs": 84, 00:16:41.577 "current_admin_qpairs": 0, 00:16:41.577 "current_io_qpairs": 0, 00:16:41.577 "pending_bdev_io": 0, 00:16:41.577 "completed_nvme_io": 186, 00:16:41.577 "transports": [ 00:16:41.577 { 00:16:41.577 "trtype": "TCP" 00:16:41.577 } 00:16:41.577 ] 00:16:41.577 }, 00:16:41.577 { 00:16:41.577 "name": "nvmf_tgt_poll_group_002", 00:16:41.577 "admin_qpairs": 1, 00:16:41.577 "io_qpairs": 84, 00:16:41.577 "current_admin_qpairs": 0, 00:16:41.577 "current_io_qpairs": 0, 00:16:41.577 "pending_bdev_io": 0, 00:16:41.577 "completed_nvme_io": 183, 00:16:41.577 "transports": [ 00:16:41.577 { 00:16:41.577 "trtype": "TCP" 00:16:41.577 } 00:16:41.577 ] 00:16:41.577 }, 00:16:41.577 { 00:16:41.577 "name": "nvmf_tgt_poll_group_003", 00:16:41.577 "admin_qpairs": 2, 00:16:41.577 "io_qpairs": 84, 
00:16:41.577 "current_admin_qpairs": 0, 00:16:41.577 "current_io_qpairs": 0, 00:16:41.577 "pending_bdev_io": 0, 00:16:41.577 "completed_nvme_io": 135, 00:16:41.577 "transports": [ 00:16:41.577 { 00:16:41.577 "trtype": "TCP" 00:16:41.577 } 00:16:41.577 ] 00:16:41.577 } 00:16:41.577 ] 00:16:41.577 }' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
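The `(( 7 > 0 ))` and `(( 336 > 0 ))` checks just above come from a `jsum` helper in `target/rpc.sh` that pipes a `jq` filter over the `nvmf_get_stats` JSON and totals the matches with `awk`. Since `jq` may not be installed everywhere, this sketch feeds the per-poll-group counters from the stats blob above straight into the `awk` summation stage:

```shell
#!/usr/bin/env bash
# The awk stage of jsum: sum one number per input line.
sum_lines() { awk '{s+=$1} END {print s}'; }

# admin_qpairs across the four poll groups in the stats above: 2+2+1+2
printf '%s\n' 2 2 1 2 | sum_lines      # prints 7

# io_qpairs: 84 per poll group, four groups
printf '%s\n' 84 84 84 84 | sum_lines  # prints 336
```

In the real helper the `printf` is replaced by `jq '.poll_groups[].admin_qpairs'` (or `.io_qpairs`) applied to the captured `$stats` string, which is exactly what the `-- # jq` and `-- # awk` trace lines show.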
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.577 rmmod nvme_tcp 00:16:41.577 rmmod nvme_fabrics 00:16:41.577 rmmod nvme_keyring 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 209717 ']' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 209717 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 209717 ']' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 209717 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 209717 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 209717' 00:16:41.577 killing process with pid 209717 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@969 -- # kill 209717 00:16:41.577 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 209717 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.836 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:44.376 00:16:44.376 real 0m24.793s 00:16:44.376 user 1m21.134s 00:16:44.376 sys 0m3.975s 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.376 ************************************ 00:16:44.376 END TEST nvmf_rpc 00:16:44.376 
************************************ 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:44.376 ************************************ 00:16:44.376 START TEST nvmf_invalid 00:16:44.376 ************************************ 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:44.376 * Looking for test storage... 00:16:44.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:44.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.376 --rc genhtml_branch_coverage=1 00:16:44.376 --rc genhtml_function_coverage=1 00:16:44.376 --rc genhtml_legend=1 00:16:44.376 --rc geninfo_all_blocks=1 00:16:44.376 --rc geninfo_unexecuted_blocks=1 00:16:44.376 00:16:44.376 ' 
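The `lt 1.15 2` trace above is `scripts/common.sh` deciding whether the installed `lcov` predates version 2 (to pick the right coverage flags). A simplified sketch of that component-wise version comparison; the real `cmp_versions` also splits on `-` and `:` and takes an operator argument, which this hypothetical cut-down `lt` omits:

```shell
#!/usr/bin/env bash
# Simplified version of the lt/cmp_versions helpers traced above:
# returns 0 (true) when $1 is a strictly lower dotted version than $2.
lt() {
  local IFS=.
  local -a ver1=($1) ver2=($2)
  local i
  for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
    ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
    ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
  done
  return 1  # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2: use legacy --rc options"
```

Missing components default to 0, so `1.15` vs `2` compares `1 < 2` on the first component and returns true, matching the `ver1[v]=1` / `ver2[v]=2` steps in the trace.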
00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:44.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.376 --rc genhtml_branch_coverage=1 00:16:44.376 --rc genhtml_function_coverage=1 00:16:44.376 --rc genhtml_legend=1 00:16:44.376 --rc geninfo_all_blocks=1 00:16:44.376 --rc geninfo_unexecuted_blocks=1 00:16:44.376 00:16:44.376 ' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:44.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.376 --rc genhtml_branch_coverage=1 00:16:44.376 --rc genhtml_function_coverage=1 00:16:44.376 --rc genhtml_legend=1 00:16:44.376 --rc geninfo_all_blocks=1 00:16:44.376 --rc geninfo_unexecuted_blocks=1 00:16:44.376 00:16:44.376 ' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:44.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.376 --rc genhtml_branch_coverage=1 00:16:44.376 --rc genhtml_function_coverage=1 00:16:44.376 --rc genhtml_legend=1 00:16:44.376 --rc geninfo_all_blocks=1 00:16:44.376 --rc geninfo_unexecuted_blocks=1 00:16:44.376 00:16:44.376 ' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.376 13:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.376 
13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.376 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.377 13:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:44.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:44.377 13:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:44.377 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:46.283 13:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.283 13:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:46.283 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:46.283 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:46.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:46.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.283 13:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.283 13:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:46.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:16:46.283 00:16:46.283 --- 10.0.0.2 ping statistics --- 00:16:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.283 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:16:46.283 00:16:46.283 --- 10.0.0.1 ping statistics --- 00:16:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.283 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:46.283 13:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=214262 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 214262 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 214262 ']' 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.283 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:46.283 [2024-10-14 13:27:38.052741] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:16:46.283 [2024-10-14 13:27:38.052855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.283 [2024-10-14 13:27:38.123024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.542 [2024-10-14 13:27:38.172162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.542 [2024-10-14 13:27:38.172210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.542 [2024-10-14 13:27:38.172238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.542 [2024-10-14 13:27:38.172250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.542 [2024-10-14 13:27:38.172260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.542 [2024-10-14 13:27:38.173791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.542 [2024-10-14 13:27:38.173889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.542 [2024-10-14 13:27:38.173971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.542 [2024-10-14 13:27:38.173975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:46.542 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6832 00:16:46.800 [2024-10-14 13:27:38.566181] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:46.800 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:46.800 { 00:16:46.800 "nqn": "nqn.2016-06.io.spdk:cnode6832", 00:16:46.800 "tgt_name": "foobar", 00:16:46.800 "method": "nvmf_create_subsystem", 00:16:46.800 "req_id": 1 00:16:46.800 } 00:16:46.800 Got JSON-RPC error 
response 00:16:46.800 response: 00:16:46.800 { 00:16:46.800 "code": -32603, 00:16:46.800 "message": "Unable to find target foobar" 00:16:46.800 }' 00:16:46.800 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:46.800 { 00:16:46.800 "nqn": "nqn.2016-06.io.spdk:cnode6832", 00:16:46.800 "tgt_name": "foobar", 00:16:46.800 "method": "nvmf_create_subsystem", 00:16:46.800 "req_id": 1 00:16:46.800 } 00:16:46.800 Got JSON-RPC error response 00:16:46.800 response: 00:16:46.800 { 00:16:46.800 "code": -32603, 00:16:46.800 "message": "Unable to find target foobar" 00:16:46.800 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:46.800 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:46.800 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9631 00:16:47.058 [2024-10-14 13:27:38.843097] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9631: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:47.058 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:47.058 { 00:16:47.058 "nqn": "nqn.2016-06.io.spdk:cnode9631", 00:16:47.058 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:47.058 "method": "nvmf_create_subsystem", 00:16:47.058 "req_id": 1 00:16:47.058 } 00:16:47.058 Got JSON-RPC error response 00:16:47.058 response: 00:16:47.058 { 00:16:47.058 "code": -32602, 00:16:47.058 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:47.058 }' 00:16:47.058 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:47.058 { 00:16:47.058 "nqn": "nqn.2016-06.io.spdk:cnode9631", 00:16:47.058 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:47.058 "method": "nvmf_create_subsystem", 00:16:47.058 
"req_id": 1 00:16:47.058 } 00:16:47.058 Got JSON-RPC error response 00:16:47.058 response: 00:16:47.058 { 00:16:47.058 "code": -32602, 00:16:47.058 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:47.058 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:47.058 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:47.058 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10671 00:16:47.316 [2024-10-14 13:27:39.120079] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10671: invalid model number 'SPDK_Controller' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:47.316 { 00:16:47.316 "nqn": "nqn.2016-06.io.spdk:cnode10671", 00:16:47.316 "model_number": "SPDK_Controller\u001f", 00:16:47.316 "method": "nvmf_create_subsystem", 00:16:47.316 "req_id": 1 00:16:47.316 } 00:16:47.316 Got JSON-RPC error response 00:16:47.316 response: 00:16:47.316 { 00:16:47.316 "code": -32602, 00:16:47.316 "message": "Invalid MN SPDK_Controller\u001f" 00:16:47.316 }' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:47.316 { 00:16:47.316 "nqn": "nqn.2016-06.io.spdk:cnode10671", 00:16:47.316 "model_number": "SPDK_Controller\u001f", 00:16:47.316 "method": "nvmf_create_subsystem", 00:16:47.316 "req_id": 1 00:16:47.316 } 00:16:47.316 Got JSON-RPC error response 00:16:47.316 response: 00:16:47.316 { 00:16:47.316 "code": -32602, 00:16:47.316 "message": "Invalid MN SPDK_Controller\u001f" 00:16:47.316 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.316 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:47.316 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.316 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.574 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:47.575 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:47.575 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.575 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'EM~tO4@r(;OE(^{LCY6t[' 00:16:47.575 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'EM~tO4@r(;OE(^{LCY6t[' nqn.2016-06.io.spdk:cnode16572 00:16:47.834 [2024-10-14 13:27:39.465219] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16572: invalid serial number 'EM~tO4@r(;OE(^{LCY6t[' 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:47.834 { 00:16:47.834 "nqn": "nqn.2016-06.io.spdk:cnode16572", 00:16:47.834 "serial_number": "EM~tO4@r(;OE(^{LCY6t[", 00:16:47.834 "method": "nvmf_create_subsystem", 00:16:47.834 "req_id": 1 00:16:47.834 } 00:16:47.834 Got JSON-RPC error response 00:16:47.834 response: 00:16:47.834 { 00:16:47.834 "code": -32602, 00:16:47.834 "message": "Invalid SN EM~tO4@r(;OE(^{LCY6t[" 00:16:47.834 }' 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:47.834 { 00:16:47.834 "nqn": "nqn.2016-06.io.spdk:cnode16572", 00:16:47.834 "serial_number": "EM~tO4@r(;OE(^{LCY6t[", 00:16:47.834 "method": "nvmf_create_subsystem", 00:16:47.834 "req_id": 1 00:16:47.834 } 00:16:47.834 Got JSON-RPC error response 00:16:47.834 response: 00:16:47.834 { 00:16:47.834 "code": -32602, 00:16:47.834 "message": "Invalid SN EM~tO4@r(;OE(^{LCY6t[" 00:16:47.834 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:47.834 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:47.834 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:47.835 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:47.835 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:47.835 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:47.835 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:47.835 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:47.836 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:47.836 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'x^*oF~C4Jy;`4ia{SW1CWK 5YmAg%Fo&]lh%[' 00:16:47.836 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'x^*oF~C4Jy;`4ia{SW1CWK 5YmAg%Fo&]lh%[' nqn.2016-06.io.spdk:cnode30362 00:16:48.094 [2024-10-14 13:27:39.886592] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30362: invalid model number 'x^*oF~C4Jy;`4ia{SW1CWK 5YmAg%Fo&]lh%[' 00:16:48.094 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:48.094 { 00:16:48.094 "nqn": "nqn.2016-06.io.spdk:cnode30362", 00:16:48.094 "model_number": "x^*oF~C4Jy;`4ia{SW1CWK 5YmAg%Fo&]lh%[", 00:16:48.094 "method": "nvmf_create_subsystem", 00:16:48.094 "req_id": 1 00:16:48.094 } 00:16:48.094 Got JSON-RPC error response 00:16:48.094 response: 00:16:48.094 { 00:16:48.094 "code": -32602, 00:16:48.094 "message": "Invalid MN x^*oF~C4Jy;`4ia{SW1CWK 5YmAg%Fo&]lh%[" 00:16:48.094 }' 00:16:48.094 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:48.094 { 00:16:48.094 "nqn": 
"nqn.2016-06.io.spdk:cnode30362", 00:16:48.094 "model_number": "x^*oF~C4Jy;`4ia{SW1CWK 5YmAg%Fo&]lh%[", 00:16:48.094 "method": "nvmf_create_subsystem", 00:16:48.094 "req_id": 1 00:16:48.094 } 00:16:48.094 Got JSON-RPC error response 00:16:48.094 response: 00:16:48.094 { 00:16:48.094 "code": -32602, 00:16:48.094 "message": "Invalid MN x^*oF~C4Jy;`4ia{SW1CWK 5YmAg%Fo&]lh%[" 00:16:48.094 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:48.094 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:48.352 [2024-10-14 13:27:40.175673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.352 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:48.918 [2024-10-14 13:27:40.737517] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:48.918 { 00:16:48.918 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:48.918 "listen_address": { 00:16:48.918 "trtype": "tcp", 00:16:48.918 "traddr": "", 00:16:48.918 "trsvcid": "4421" 
00:16:48.918 }, 00:16:48.918 "method": "nvmf_subsystem_remove_listener", 00:16:48.918 "req_id": 1 00:16:48.918 } 00:16:48.918 Got JSON-RPC error response 00:16:48.918 response: 00:16:48.918 { 00:16:48.918 "code": -32602, 00:16:48.918 "message": "Invalid parameters" 00:16:48.918 }' 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:48.918 { 00:16:48.918 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:48.918 "listen_address": { 00:16:48.918 "trtype": "tcp", 00:16:48.918 "traddr": "", 00:16:48.918 "trsvcid": "4421" 00:16:48.918 }, 00:16:48.918 "method": "nvmf_subsystem_remove_listener", 00:16:48.918 "req_id": 1 00:16:48.918 } 00:16:48.918 Got JSON-RPC error response 00:16:48.918 response: 00:16:48.918 { 00:16:48.918 "code": -32602, 00:16:48.918 "message": "Invalid parameters" 00:16:48.918 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:48.918 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18210 -i 0 00:16:49.175 [2024-10-14 13:27:40.998316] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18210: invalid cntlid range [0-65519] 00:16:49.176 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:49.176 { 00:16:49.176 "nqn": "nqn.2016-06.io.spdk:cnode18210", 00:16:49.176 "min_cntlid": 0, 00:16:49.176 "method": "nvmf_create_subsystem", 00:16:49.176 "req_id": 1 00:16:49.176 } 00:16:49.176 Got JSON-RPC error response 00:16:49.176 response: 00:16:49.176 { 00:16:49.176 "code": -32602, 00:16:49.176 "message": "Invalid cntlid range [0-65519]" 00:16:49.176 }' 00:16:49.176 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:49.176 { 00:16:49.176 "nqn": "nqn.2016-06.io.spdk:cnode18210", 00:16:49.176 "min_cntlid": 0, 00:16:49.176 "method": 
"nvmf_create_subsystem", 00:16:49.176 "req_id": 1 00:16:49.176 } 00:16:49.176 Got JSON-RPC error response 00:16:49.176 response: 00:16:49.176 { 00:16:49.176 "code": -32602, 00:16:49.176 "message": "Invalid cntlid range [0-65519]" 00:16:49.176 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:49.176 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23955 -i 65520 00:16:49.434 [2024-10-14 13:27:41.267219] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23955: invalid cntlid range [65520-65519] 00:16:49.434 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:49.434 { 00:16:49.434 "nqn": "nqn.2016-06.io.spdk:cnode23955", 00:16:49.434 "min_cntlid": 65520, 00:16:49.434 "method": "nvmf_create_subsystem", 00:16:49.434 "req_id": 1 00:16:49.434 } 00:16:49.434 Got JSON-RPC error response 00:16:49.434 response: 00:16:49.434 { 00:16:49.434 "code": -32602, 00:16:49.434 "message": "Invalid cntlid range [65520-65519]" 00:16:49.434 }' 00:16:49.434 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:49.434 { 00:16:49.434 "nqn": "nqn.2016-06.io.spdk:cnode23955", 00:16:49.434 "min_cntlid": 65520, 00:16:49.434 "method": "nvmf_create_subsystem", 00:16:49.434 "req_id": 1 00:16:49.434 } 00:16:49.434 Got JSON-RPC error response 00:16:49.434 response: 00:16:49.434 { 00:16:49.434 "code": -32602, 00:16:49.434 "message": "Invalid cntlid range [65520-65519]" 00:16:49.434 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:49.691 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27536 -I 0 00:16:49.949 [2024-10-14 13:27:41.552170] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode27536: invalid cntlid range [1-0] 00:16:49.949 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:49.949 { 00:16:49.949 "nqn": "nqn.2016-06.io.spdk:cnode27536", 00:16:49.949 "max_cntlid": 0, 00:16:49.949 "method": "nvmf_create_subsystem", 00:16:49.949 "req_id": 1 00:16:49.949 } 00:16:49.949 Got JSON-RPC error response 00:16:49.949 response: 00:16:49.949 { 00:16:49.949 "code": -32602, 00:16:49.949 "message": "Invalid cntlid range [1-0]" 00:16:49.949 }' 00:16:49.949 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:49.949 { 00:16:49.949 "nqn": "nqn.2016-06.io.spdk:cnode27536", 00:16:49.949 "max_cntlid": 0, 00:16:49.949 "method": "nvmf_create_subsystem", 00:16:49.949 "req_id": 1 00:16:49.949 } 00:16:49.949 Got JSON-RPC error response 00:16:49.949 response: 00:16:49.949 { 00:16:49.949 "code": -32602, 00:16:49.949 "message": "Invalid cntlid range [1-0]" 00:16:49.949 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:49.949 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8996 -I 65520 00:16:50.207 [2024-10-14 13:27:41.829043] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8996: invalid cntlid range [1-65520] 00:16:50.207 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:50.207 { 00:16:50.207 "nqn": "nqn.2016-06.io.spdk:cnode8996", 00:16:50.207 "max_cntlid": 65520, 00:16:50.207 "method": "nvmf_create_subsystem", 00:16:50.207 "req_id": 1 00:16:50.207 } 00:16:50.207 Got JSON-RPC error response 00:16:50.207 response: 00:16:50.207 { 00:16:50.207 "code": -32602, 00:16:50.207 "message": "Invalid cntlid range [1-65520]" 00:16:50.207 }' 00:16:50.207 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:16:50.207 { 00:16:50.207 "nqn": "nqn.2016-06.io.spdk:cnode8996", 00:16:50.207 "max_cntlid": 65520, 00:16:50.207 "method": "nvmf_create_subsystem", 00:16:50.207 "req_id": 1 00:16:50.207 } 00:16:50.207 Got JSON-RPC error response 00:16:50.207 response: 00:16:50.207 { 00:16:50.207 "code": -32602, 00:16:50.207 "message": "Invalid cntlid range [1-65520]" 00:16:50.207 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:50.207 13:27:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24794 -i 6 -I 5 00:16:50.465 [2024-10-14 13:27:42.093950] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24794: invalid cntlid range [6-5] 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:50.465 { 00:16:50.465 "nqn": "nqn.2016-06.io.spdk:cnode24794", 00:16:50.465 "min_cntlid": 6, 00:16:50.465 "max_cntlid": 5, 00:16:50.465 "method": "nvmf_create_subsystem", 00:16:50.465 "req_id": 1 00:16:50.465 } 00:16:50.465 Got JSON-RPC error response 00:16:50.465 response: 00:16:50.465 { 00:16:50.465 "code": -32602, 00:16:50.465 "message": "Invalid cntlid range [6-5]" 00:16:50.465 }' 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:50.465 { 00:16:50.465 "nqn": "nqn.2016-06.io.spdk:cnode24794", 00:16:50.465 "min_cntlid": 6, 00:16:50.465 "max_cntlid": 5, 00:16:50.465 "method": "nvmf_create_subsystem", 00:16:50.465 "req_id": 1 00:16:50.465 } 00:16:50.465 Got JSON-RPC error response 00:16:50.465 response: 00:16:50.465 { 00:16:50.465 "code": -32602, 00:16:50.465 "message": "Invalid cntlid range [6-5]" 00:16:50.465 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:50.465 { 00:16:50.465 "name": "foobar", 00:16:50.465 "method": "nvmf_delete_target", 00:16:50.465 "req_id": 1 00:16:50.465 } 00:16:50.465 Got JSON-RPC error response 00:16:50.465 response: 00:16:50.465 { 00:16:50.465 "code": -32602, 00:16:50.465 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:50.465 }' 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:50.465 { 00:16:50.465 "name": "foobar", 00:16:50.465 "method": "nvmf_delete_target", 00:16:50.465 "req_id": 1 00:16:50.465 } 00:16:50.465 Got JSON-RPC error response 00:16:50.465 response: 00:16:50.465 { 00:16:50.465 "code": -32602, 00:16:50.465 "message": "The specified target doesn't exist, cannot delete it." 00:16:50.465 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:50.465 rmmod nvme_tcp 00:16:50.465 
rmmod nvme_fabrics 00:16:50.465 rmmod nvme_keyring 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 214262 ']' 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 214262 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 214262 ']' 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 214262 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 214262 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 214262' 00:16:50.465 killing process with pid 214262 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 214262 00:16:50.465 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 214262 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.723 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:53.259 00:16:53.259 real 0m8.889s 00:16:53.259 user 0m21.351s 00:16:53.259 sys 0m2.456s 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:53.259 ************************************ 00:16:53.259 END TEST nvmf_invalid 00:16:53.259 ************************************ 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:53.259 13:27:44 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:53.259 ************************************ 00:16:53.259 START TEST nvmf_connect_stress 00:16:53.259 ************************************ 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:53.259 * Looking for test storage... 00:16:53.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:53.259 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:53.260 13:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.260 --rc genhtml_branch_coverage=1 00:16:53.260 --rc genhtml_function_coverage=1 00:16:53.260 --rc genhtml_legend=1 00:16:53.260 --rc geninfo_all_blocks=1 00:16:53.260 --rc geninfo_unexecuted_blocks=1 00:16:53.260 00:16:53.260 ' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.260 --rc genhtml_branch_coverage=1 00:16:53.260 --rc genhtml_function_coverage=1 00:16:53.260 --rc genhtml_legend=1 00:16:53.260 --rc geninfo_all_blocks=1 00:16:53.260 --rc geninfo_unexecuted_blocks=1 00:16:53.260 00:16:53.260 ' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.260 --rc genhtml_branch_coverage=1 00:16:53.260 --rc genhtml_function_coverage=1 00:16:53.260 --rc genhtml_legend=1 00:16:53.260 --rc geninfo_all_blocks=1 00:16:53.260 --rc geninfo_unexecuted_blocks=1 00:16:53.260 00:16:53.260 ' 00:16:53.260 13:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:53.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.260 --rc genhtml_branch_coverage=1 00:16:53.260 --rc genhtml_function_coverage=1 00:16:53.260 --rc genhtml_legend=1 00:16:53.260 --rc geninfo_all_blocks=1 00:16:53.260 --rc geninfo_unexecuted_blocks=1 00:16:53.260 00:16:53.260 ' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.260 13:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:53.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:53.260 13:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:53.260 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:55.163 
Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:55.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:55.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.163 13:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:55.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:55.163 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:55.163 
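The "Found net devices under 0000:0a:00.x" entries above come from a two-step bash pattern in nvmf/common.sh (lines @409 and @425 in the trace): glob the PCI device's sysfs `net/` directory into an array, then strip the leading path components with `${arr[@]##*/}`. A minimal sketch, using a mock sysfs tree so it runs without real hardware (the mock paths and the `cvl_0_0` name are taken from this log, not from the script itself):

```shell
# Glob the device's net/ directory, then keep only the interface names.
mock=$(mktemp -d)                 # stand-in for /sys/bus/pci
pci="0000:0a:00.0"
mkdir -p "$mock/devices/$pci/net/cvl_0_0"
pci_net_devs=("$mock/devices/$pci/net/"*)   # full paths, one per interface
pci_net_devs=("${pci_net_devs[@]##*/}")     # strip everything up to the last '/'
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$mock"
```

The `##*/` expansion applies to every array element at once, which is why the script can handle multi-port devices with the same two lines.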
13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:16:55.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:16:55.164 00:16:55.164 --- 10.0.0.2 ping statistics --- 00:16:55.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.164 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:16:55.164 00:16:55.164 --- 10.0.0.1 ping statistics --- 00:16:55.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.164 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=216901 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 216901 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 216901 ']' 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.164 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.164 [2024-10-14 13:27:46.947069] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
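The `waitforlisten 216901` step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") is a poll-until-socket-exists loop. A conceptual sketch, assuming only the polling idea; the real helper also checks the PID and uses the SPDK default socket path, while this self-contained version creates a temp path from a background stand-in process:

```shell
# Poll until the "server" has created its listening socket path.
sock=$(mktemp -u)                  # unused temp path standing in for spdk.sock
( sleep 0.2; : >"$sock" ) &        # stand-in for nvmf_tgt creating its socket
found=no
for _ in $(seq 1 100); do          # up to ~5s at 50ms per attempt
  if [ -e "$sock" ]; then found=yes; break; fi
  sleep 0.05
done
echo "socket present: $found"
rm -f "$sock"
wait
```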
00:16:55.164 [2024-10-14 13:27:46.947162] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.164 [2024-10-14 13:27:47.011616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.422 [2024-10-14 13:27:47.060733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.422 [2024-10-14 13:27:47.060784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.422 [2024-10-14 13:27:47.060812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.422 [2024-10-14 13:27:47.060822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.422 [2024-10-14 13:27:47.060831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.422 [2024-10-14 13:27:47.062349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.422 [2024-10-14 13:27:47.062374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.422 [2024-10-14 13:27:47.062378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.422 [2024-10-14 13:27:47.261631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.422 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.680 [2024-10-14 13:27:47.278793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.680 NULL1 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=217038 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.680 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.939 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.939 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:55.939 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.939 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
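The twenty repetitions of `for i in $(seq 1 20)` / `cat` above are connect_stress.sh assembling its RPC batch file (`rpc.txt`): each pass appends one command template to the file, which is later replayed against the target while the perf process runs. A sketch of the same pattern; the RPC name below is an illustrative placeholder, not the test's actual payload:

```shell
# Build a batch file by appending one RPC line per loop iteration.
rpcs=$(mktemp)                          # stand-in for .../target/rpc.txt
rm -f "$rpcs"                           # script starts from a clean file
for i in $(seq 1 20); do
  echo "framework_wait_init" >>"$rpcs"  # placeholder RPC command
done
lines=$(wc -l <"$rpcs")
echo "wrote $lines rpc lines"
rm -f "$rpcs"
```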
common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.939 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.197 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.197 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:56.197 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.197 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.197 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.454 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.454 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:56.454 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.454 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.454 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.019 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.019 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:57.019 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.019 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.019 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.277 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.277 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:57.277 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.277 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.277 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.534 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.534 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:57.534 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.534 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.534 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.791 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.791 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:57.791 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.791 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.791 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.049 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.049 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:58.049 13:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.049 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.049 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.616 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.616 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:58.616 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.616 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.616 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.873 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.873 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:58.873 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.873 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.873 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.131 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.131 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:59.131 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.131 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.131 13:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.389 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.389 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:59.389 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.389 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.389 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.954 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.954 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:16:59.954 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.954 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.954 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.212 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.212 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:00.212 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.212 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.212 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.470 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.470 13:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:00.470 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.470 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.470 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.727 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.727 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:00.727 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.727 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.727 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:00.985 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.985 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:00.985 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.985 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.985 13:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.551 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.551 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:01.551 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.551 13:27:53 
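The repeated `kill -0 217038` entries that dominate the tail of this section are the stress loop's liveness probe: signal 0 delivers nothing to the process, but the exit status reports whether the PID still exists, so the loop keeps issuing RPCs only while the perf process is alive. PID 217038 belongs to this CI run; the sketch below probes its own short-lived background process instead:

```shell
# kill -0 sends no signal; its exit status says whether the PID exists.
sleep 5 &                               # stand-in for the connect_stress perf process
perf_pid=$!
alive=no
kill -0 "$perf_pid" 2>/dev/null && alive=yes
echo "pid $perf_pid alive=$alive"
kill "$perf_pid" 2>/dev/null
wait "$perf_pid" 2>/dev/null || true    # reap; ignore the killed status
```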
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.551 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:01.809 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.809 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:01.809 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.809 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.809 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.068 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.068 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:02.068 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.068 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.068 13:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.326 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.326 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:02.326 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.326 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.326 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.585 13:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.585 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:02.585 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.585 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.585 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.151 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.151 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:03.151 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.151 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.151 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.408 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.408 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:03.408 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.408 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.408 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.666 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.666 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:03.666 
13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.666 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.666 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:03.924 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.924 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:03.924 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.924 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.924 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.182 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.182 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:04.182 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.182 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.182 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.747 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.747 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:04.747 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.747 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.747 
13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.005 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.005 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:05.005 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.005 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.005 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.263 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.263 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:05.263 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.263 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.263 13:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.521 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.521 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:05.521 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.521 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.521 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:05.779 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:05.779 13:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217038 00:17:05.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (217038) - No such process 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 217038 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.779 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.779 rmmod nvme_tcp 00:17:06.039 rmmod nvme_fabrics 00:17:06.039 rmmod nvme_keyring 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 216901 ']' 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 216901 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 216901 ']' 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 216901 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 216901 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 216901' 00:17:06.039 killing process with pid 216901 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 216901 00:17:06.039 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 216901 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.296 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.204 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:08.204 00:17:08.204 real 0m15.372s 00:17:08.204 user 0m40.137s 00:17:08.204 sys 0m4.537s 00:17:08.204 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.204 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.204 ************************************ 00:17:08.204 END TEST nvmf_connect_stress 00:17:08.204 ************************************ 00:17:08.204 13:27:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:08.204 13:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:08.204 13:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:17:08.204 13:27:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.204 ************************************ 00:17:08.204 START TEST nvmf_fused_ordering 00:17:08.204 ************************************ 00:17:08.204 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:08.204 * Looking for test storage... 00:17:08.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.204 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.465 --rc genhtml_branch_coverage=1 00:17:08.465 --rc genhtml_function_coverage=1 00:17:08.465 --rc genhtml_legend=1 00:17:08.465 --rc geninfo_all_blocks=1 00:17:08.465 --rc geninfo_unexecuted_blocks=1 00:17:08.465 00:17:08.465 ' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.465 --rc genhtml_branch_coverage=1 00:17:08.465 --rc genhtml_function_coverage=1 00:17:08.465 --rc genhtml_legend=1 00:17:08.465 --rc geninfo_all_blocks=1 00:17:08.465 --rc geninfo_unexecuted_blocks=1 00:17:08.465 00:17:08.465 ' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.465 --rc genhtml_branch_coverage=1 00:17:08.465 --rc genhtml_function_coverage=1 00:17:08.465 --rc genhtml_legend=1 00:17:08.465 --rc geninfo_all_blocks=1 00:17:08.465 --rc geninfo_unexecuted_blocks=1 00:17:08.465 00:17:08.465 ' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:08.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.465 --rc genhtml_branch_coverage=1 
00:17:08.465 --rc genhtml_function_coverage=1 00:17:08.465 --rc genhtml_legend=1 00:17:08.465 --rc geninfo_all_blocks=1 00:17:08.465 --rc geninfo_unexecuted_blocks=1 00:17:08.465 00:17:08.465 ' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.465 13:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.465 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:08.466 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:10.997 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.998 13:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.998 13:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.998 13:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.998 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:10.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:17:10.998 00:17:10.998 --- 10.0.0.2 ping statistics --- 00:17:10.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.998 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:17:10.998 00:17:10.998 --- 10.0.0.1 ping statistics --- 00:17:10.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.998 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:10.998 13:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:10.998 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=220191 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 220191 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 220191 ']' 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 [2024-10-14 13:28:02.540811] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:17:10.999 [2024-10-14 13:28:02.540895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.999 [2024-10-14 13:28:02.603986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.999 [2024-10-14 13:28:02.648049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.999 [2024-10-14 13:28:02.648104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.999 [2024-10-14 13:28:02.648139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.999 [2024-10-14 13:28:02.648151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.999 [2024-10-14 13:28:02.648160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:10.999 [2024-10-14 13:28:02.648754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 [2024-10-14 13:28:02.783304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 [2024-10-14 13:28:02.799525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 NULL1 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.999 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:10.999 [2024-10-14 13:28:02.842978] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:17:10.999 [2024-10-14 13:28:02.843011] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220265 ] 00:17:11.565 Attached to nqn.2016-06.io.spdk:cnode1 00:17:11.565 Namespace ID: 1 size: 1GB 00:17:11.565 fused_ordering(0) 00:17:11.565 fused_ordering(1) 00:17:11.565 fused_ordering(2) 00:17:11.565 fused_ordering(3) 00:17:11.565 fused_ordering(4) 00:17:11.565 fused_ordering(5) 00:17:11.565 fused_ordering(6) 00:17:11.565 fused_ordering(7) 00:17:11.565 fused_ordering(8) 00:17:11.565 fused_ordering(9) 00:17:11.565 fused_ordering(10) 00:17:11.565 fused_ordering(11) 00:17:11.565 fused_ordering(12) 00:17:11.565 fused_ordering(13) 00:17:11.565 fused_ordering(14) 00:17:11.565 fused_ordering(15) 00:17:11.565 fused_ordering(16) 00:17:11.565 fused_ordering(17) 00:17:11.565 fused_ordering(18) 00:17:11.565 fused_ordering(19) 00:17:11.565 fused_ordering(20) 00:17:11.565 fused_ordering(21) 00:17:11.565 fused_ordering(22) 00:17:11.565 fused_ordering(23) 00:17:11.565 fused_ordering(24) 00:17:11.565 fused_ordering(25) 00:17:11.565 fused_ordering(26) 00:17:11.565 fused_ordering(27) 00:17:11.565 
fused_ordering(28) 00:17:11.565 ... fused_ordering(513) 00:17:12.083 [fused_ordering counter output continues in strict sequence through entries 28-513; timestamps advance from 00:17:11.565 to 00:17:12.083]
00:17:12.083 fused_ordering(514) 00:17:12.083 fused_ordering(515) 00:17:12.083 fused_ordering(516) 00:17:12.083 fused_ordering(517) 00:17:12.083 fused_ordering(518) 00:17:12.083 fused_ordering(519) 00:17:12.083 fused_ordering(520) 00:17:12.083 fused_ordering(521) 00:17:12.083 fused_ordering(522) 00:17:12.083 fused_ordering(523) 00:17:12.083 fused_ordering(524) 00:17:12.083 fused_ordering(525) 00:17:12.083 fused_ordering(526) 00:17:12.083 fused_ordering(527) 00:17:12.083 fused_ordering(528) 00:17:12.083 fused_ordering(529) 00:17:12.083 fused_ordering(530) 00:17:12.083 fused_ordering(531) 00:17:12.083 fused_ordering(532) 00:17:12.083 fused_ordering(533) 00:17:12.083 fused_ordering(534) 00:17:12.083 fused_ordering(535) 00:17:12.083 fused_ordering(536) 00:17:12.083 fused_ordering(537) 00:17:12.083 fused_ordering(538) 00:17:12.083 fused_ordering(539) 00:17:12.083 fused_ordering(540) 00:17:12.083 fused_ordering(541) 00:17:12.083 fused_ordering(542) 00:17:12.083 fused_ordering(543) 00:17:12.083 fused_ordering(544) 00:17:12.083 fused_ordering(545) 00:17:12.083 fused_ordering(546) 00:17:12.083 fused_ordering(547) 00:17:12.083 fused_ordering(548) 00:17:12.083 fused_ordering(549) 00:17:12.083 fused_ordering(550) 00:17:12.083 fused_ordering(551) 00:17:12.083 fused_ordering(552) 00:17:12.083 fused_ordering(553) 00:17:12.083 fused_ordering(554) 00:17:12.083 fused_ordering(555) 00:17:12.083 fused_ordering(556) 00:17:12.083 fused_ordering(557) 00:17:12.083 fused_ordering(558) 00:17:12.083 fused_ordering(559) 00:17:12.083 fused_ordering(560) 00:17:12.083 fused_ordering(561) 00:17:12.083 fused_ordering(562) 00:17:12.083 fused_ordering(563) 00:17:12.083 fused_ordering(564) 00:17:12.083 fused_ordering(565) 00:17:12.083 fused_ordering(566) 00:17:12.083 fused_ordering(567) 00:17:12.083 fused_ordering(568) 00:17:12.083 fused_ordering(569) 00:17:12.083 fused_ordering(570) 00:17:12.083 fused_ordering(571) 00:17:12.083 fused_ordering(572) 00:17:12.083 fused_ordering(573) 00:17:12.083 
fused_ordering(574) 00:17:12.083 fused_ordering(575) 00:17:12.083 fused_ordering(576) 00:17:12.083 fused_ordering(577) 00:17:12.083 fused_ordering(578) 00:17:12.083 fused_ordering(579) 00:17:12.083 fused_ordering(580) 00:17:12.083 fused_ordering(581) 00:17:12.083 fused_ordering(582) 00:17:12.083 fused_ordering(583) 00:17:12.083 fused_ordering(584) 00:17:12.083 fused_ordering(585) 00:17:12.083 fused_ordering(586) 00:17:12.083 fused_ordering(587) 00:17:12.083 fused_ordering(588) 00:17:12.083 fused_ordering(589) 00:17:12.083 fused_ordering(590) 00:17:12.083 fused_ordering(591) 00:17:12.083 fused_ordering(592) 00:17:12.083 fused_ordering(593) 00:17:12.083 fused_ordering(594) 00:17:12.083 fused_ordering(595) 00:17:12.083 fused_ordering(596) 00:17:12.083 fused_ordering(597) 00:17:12.083 fused_ordering(598) 00:17:12.083 fused_ordering(599) 00:17:12.083 fused_ordering(600) 00:17:12.083 fused_ordering(601) 00:17:12.083 fused_ordering(602) 00:17:12.083 fused_ordering(603) 00:17:12.083 fused_ordering(604) 00:17:12.083 fused_ordering(605) 00:17:12.083 fused_ordering(606) 00:17:12.083 fused_ordering(607) 00:17:12.083 fused_ordering(608) 00:17:12.084 fused_ordering(609) 00:17:12.084 fused_ordering(610) 00:17:12.084 fused_ordering(611) 00:17:12.084 fused_ordering(612) 00:17:12.084 fused_ordering(613) 00:17:12.084 fused_ordering(614) 00:17:12.084 fused_ordering(615) 00:17:12.650 fused_ordering(616) 00:17:12.650 fused_ordering(617) 00:17:12.650 fused_ordering(618) 00:17:12.650 fused_ordering(619) 00:17:12.650 fused_ordering(620) 00:17:12.650 fused_ordering(621) 00:17:12.650 fused_ordering(622) 00:17:12.650 fused_ordering(623) 00:17:12.650 fused_ordering(624) 00:17:12.650 fused_ordering(625) 00:17:12.650 fused_ordering(626) 00:17:12.650 fused_ordering(627) 00:17:12.650 fused_ordering(628) 00:17:12.650 fused_ordering(629) 00:17:12.650 fused_ordering(630) 00:17:12.650 fused_ordering(631) 00:17:12.650 fused_ordering(632) 00:17:12.650 fused_ordering(633) 00:17:12.650 fused_ordering(634) 
00:17:12.650 fused_ordering(635) 00:17:12.650 fused_ordering(636) 00:17:12.650 fused_ordering(637) 00:17:12.650 fused_ordering(638) 00:17:12.650 fused_ordering(639) 00:17:12.650 fused_ordering(640) 00:17:12.650 fused_ordering(641) 00:17:12.650 fused_ordering(642) 00:17:12.650 fused_ordering(643) 00:17:12.650 fused_ordering(644) 00:17:12.650 fused_ordering(645) 00:17:12.650 fused_ordering(646) 00:17:12.650 fused_ordering(647) 00:17:12.650 fused_ordering(648) 00:17:12.650 fused_ordering(649) 00:17:12.650 fused_ordering(650) 00:17:12.650 fused_ordering(651) 00:17:12.650 fused_ordering(652) 00:17:12.650 fused_ordering(653) 00:17:12.650 fused_ordering(654) 00:17:12.650 fused_ordering(655) 00:17:12.650 fused_ordering(656) 00:17:12.650 fused_ordering(657) 00:17:12.650 fused_ordering(658) 00:17:12.650 fused_ordering(659) 00:17:12.650 fused_ordering(660) 00:17:12.650 fused_ordering(661) 00:17:12.650 fused_ordering(662) 00:17:12.650 fused_ordering(663) 00:17:12.650 fused_ordering(664) 00:17:12.650 fused_ordering(665) 00:17:12.650 fused_ordering(666) 00:17:12.650 fused_ordering(667) 00:17:12.650 fused_ordering(668) 00:17:12.650 fused_ordering(669) 00:17:12.650 fused_ordering(670) 00:17:12.650 fused_ordering(671) 00:17:12.650 fused_ordering(672) 00:17:12.650 fused_ordering(673) 00:17:12.650 fused_ordering(674) 00:17:12.650 fused_ordering(675) 00:17:12.650 fused_ordering(676) 00:17:12.650 fused_ordering(677) 00:17:12.650 fused_ordering(678) 00:17:12.650 fused_ordering(679) 00:17:12.650 fused_ordering(680) 00:17:12.650 fused_ordering(681) 00:17:12.650 fused_ordering(682) 00:17:12.650 fused_ordering(683) 00:17:12.650 fused_ordering(684) 00:17:12.650 fused_ordering(685) 00:17:12.650 fused_ordering(686) 00:17:12.650 fused_ordering(687) 00:17:12.650 fused_ordering(688) 00:17:12.650 fused_ordering(689) 00:17:12.650 fused_ordering(690) 00:17:12.650 fused_ordering(691) 00:17:12.650 fused_ordering(692) 00:17:12.650 fused_ordering(693) 00:17:12.650 fused_ordering(694) 00:17:12.650 
fused_ordering(695) 00:17:12.650 fused_ordering(696) 00:17:12.650 fused_ordering(697) 00:17:12.650 fused_ordering(698) 00:17:12.650 fused_ordering(699) 00:17:12.650 fused_ordering(700) 00:17:12.650 fused_ordering(701) 00:17:12.650 fused_ordering(702) 00:17:12.650 fused_ordering(703) 00:17:12.650 fused_ordering(704) 00:17:12.650 fused_ordering(705) 00:17:12.650 fused_ordering(706) 00:17:12.650 fused_ordering(707) 00:17:12.650 fused_ordering(708) 00:17:12.650 fused_ordering(709) 00:17:12.650 fused_ordering(710) 00:17:12.650 fused_ordering(711) 00:17:12.650 fused_ordering(712) 00:17:12.650 fused_ordering(713) 00:17:12.650 fused_ordering(714) 00:17:12.650 fused_ordering(715) 00:17:12.650 fused_ordering(716) 00:17:12.650 fused_ordering(717) 00:17:12.650 fused_ordering(718) 00:17:12.650 fused_ordering(719) 00:17:12.650 fused_ordering(720) 00:17:12.650 fused_ordering(721) 00:17:12.650 fused_ordering(722) 00:17:12.650 fused_ordering(723) 00:17:12.650 fused_ordering(724) 00:17:12.650 fused_ordering(725) 00:17:12.650 fused_ordering(726) 00:17:12.650 fused_ordering(727) 00:17:12.650 fused_ordering(728) 00:17:12.650 fused_ordering(729) 00:17:12.650 fused_ordering(730) 00:17:12.650 fused_ordering(731) 00:17:12.650 fused_ordering(732) 00:17:12.650 fused_ordering(733) 00:17:12.650 fused_ordering(734) 00:17:12.650 fused_ordering(735) 00:17:12.650 fused_ordering(736) 00:17:12.650 fused_ordering(737) 00:17:12.650 fused_ordering(738) 00:17:12.650 fused_ordering(739) 00:17:12.650 fused_ordering(740) 00:17:12.650 fused_ordering(741) 00:17:12.650 fused_ordering(742) 00:17:12.650 fused_ordering(743) 00:17:12.650 fused_ordering(744) 00:17:12.650 fused_ordering(745) 00:17:12.650 fused_ordering(746) 00:17:12.650 fused_ordering(747) 00:17:12.650 fused_ordering(748) 00:17:12.650 fused_ordering(749) 00:17:12.650 fused_ordering(750) 00:17:12.650 fused_ordering(751) 00:17:12.650 fused_ordering(752) 00:17:12.650 fused_ordering(753) 00:17:12.650 fused_ordering(754) 00:17:12.650 fused_ordering(755) 
00:17:12.650 fused_ordering(756) 00:17:12.650 fused_ordering(757) 00:17:12.650 fused_ordering(758) 00:17:12.650 fused_ordering(759) 00:17:12.650 fused_ordering(760) 00:17:12.650 fused_ordering(761) 00:17:12.650 fused_ordering(762) 00:17:12.650 fused_ordering(763) 00:17:12.650 fused_ordering(764) 00:17:12.650 fused_ordering(765) 00:17:12.650 fused_ordering(766) 00:17:12.650 fused_ordering(767) 00:17:12.650 fused_ordering(768) 00:17:12.650 fused_ordering(769) 00:17:12.650 fused_ordering(770) 00:17:12.650 fused_ordering(771) 00:17:12.650 fused_ordering(772) 00:17:12.650 fused_ordering(773) 00:17:12.650 fused_ordering(774) 00:17:12.650 fused_ordering(775) 00:17:12.650 fused_ordering(776) 00:17:12.650 fused_ordering(777) 00:17:12.650 fused_ordering(778) 00:17:12.650 fused_ordering(779) 00:17:12.650 fused_ordering(780) 00:17:12.650 fused_ordering(781) 00:17:12.650 fused_ordering(782) 00:17:12.650 fused_ordering(783) 00:17:12.650 fused_ordering(784) 00:17:12.650 fused_ordering(785) 00:17:12.650 fused_ordering(786) 00:17:12.650 fused_ordering(787) 00:17:12.650 fused_ordering(788) 00:17:12.650 fused_ordering(789) 00:17:12.650 fused_ordering(790) 00:17:12.650 fused_ordering(791) 00:17:12.650 fused_ordering(792) 00:17:12.650 fused_ordering(793) 00:17:12.650 fused_ordering(794) 00:17:12.650 fused_ordering(795) 00:17:12.650 fused_ordering(796) 00:17:12.650 fused_ordering(797) 00:17:12.650 fused_ordering(798) 00:17:12.650 fused_ordering(799) 00:17:12.650 fused_ordering(800) 00:17:12.650 fused_ordering(801) 00:17:12.650 fused_ordering(802) 00:17:12.650 fused_ordering(803) 00:17:12.650 fused_ordering(804) 00:17:12.650 fused_ordering(805) 00:17:12.650 fused_ordering(806) 00:17:12.650 fused_ordering(807) 00:17:12.650 fused_ordering(808) 00:17:12.650 fused_ordering(809) 00:17:12.650 fused_ordering(810) 00:17:12.650 fused_ordering(811) 00:17:12.650 fused_ordering(812) 00:17:12.650 fused_ordering(813) 00:17:12.650 fused_ordering(814) 00:17:12.650 fused_ordering(815) 00:17:12.650 
fused_ordering(816) 00:17:12.650 fused_ordering(817) 00:17:12.650 fused_ordering(818) 00:17:12.650 fused_ordering(819) 00:17:12.650 fused_ordering(820) 00:17:13.216 fused_ordering(821) 00:17:13.216 fused_ordering(822) 00:17:13.216 fused_ordering(823) 00:17:13.216 fused_ordering(824) 00:17:13.216 fused_ordering(825) 00:17:13.216 fused_ordering(826) 00:17:13.216 fused_ordering(827) 00:17:13.216 fused_ordering(828) 00:17:13.216 fused_ordering(829) 00:17:13.216 fused_ordering(830) 00:17:13.216 fused_ordering(831) 00:17:13.216 fused_ordering(832) 00:17:13.216 fused_ordering(833) 00:17:13.216 fused_ordering(834) 00:17:13.216 fused_ordering(835) 00:17:13.216 fused_ordering(836) 00:17:13.216 fused_ordering(837) 00:17:13.216 fused_ordering(838) 00:17:13.216 fused_ordering(839) 00:17:13.216 fused_ordering(840) 00:17:13.216 fused_ordering(841) 00:17:13.216 fused_ordering(842) 00:17:13.216 fused_ordering(843) 00:17:13.216 fused_ordering(844) 00:17:13.216 fused_ordering(845) 00:17:13.216 fused_ordering(846) 00:17:13.216 fused_ordering(847) 00:17:13.216 fused_ordering(848) 00:17:13.216 fused_ordering(849) 00:17:13.216 fused_ordering(850) 00:17:13.216 fused_ordering(851) 00:17:13.216 fused_ordering(852) 00:17:13.216 fused_ordering(853) 00:17:13.216 fused_ordering(854) 00:17:13.216 fused_ordering(855) 00:17:13.216 fused_ordering(856) 00:17:13.216 fused_ordering(857) 00:17:13.216 fused_ordering(858) 00:17:13.216 fused_ordering(859) 00:17:13.216 fused_ordering(860) 00:17:13.216 fused_ordering(861) 00:17:13.216 fused_ordering(862) 00:17:13.216 fused_ordering(863) 00:17:13.216 fused_ordering(864) 00:17:13.216 fused_ordering(865) 00:17:13.216 fused_ordering(866) 00:17:13.216 fused_ordering(867) 00:17:13.216 fused_ordering(868) 00:17:13.216 fused_ordering(869) 00:17:13.216 fused_ordering(870) 00:17:13.216 fused_ordering(871) 00:17:13.216 fused_ordering(872) 00:17:13.216 fused_ordering(873) 00:17:13.216 fused_ordering(874) 00:17:13.216 fused_ordering(875) 00:17:13.216 fused_ordering(876) 
00:17:13.216 fused_ordering(877) 00:17:13.216 fused_ordering(878) 00:17:13.216 fused_ordering(879) 00:17:13.216 fused_ordering(880) 00:17:13.216 fused_ordering(881) 00:17:13.216 fused_ordering(882) 00:17:13.216 fused_ordering(883) 00:17:13.216 fused_ordering(884) 00:17:13.216 fused_ordering(885) 00:17:13.216 fused_ordering(886) 00:17:13.216 fused_ordering(887) 00:17:13.216 fused_ordering(888) 00:17:13.216 fused_ordering(889) 00:17:13.216 fused_ordering(890) 00:17:13.216 fused_ordering(891) 00:17:13.216 fused_ordering(892) 00:17:13.216 fused_ordering(893) 00:17:13.216 fused_ordering(894) 00:17:13.216 fused_ordering(895) 00:17:13.216 fused_ordering(896) 00:17:13.216 fused_ordering(897) 00:17:13.216 fused_ordering(898) 00:17:13.216 fused_ordering(899) 00:17:13.216 fused_ordering(900) 00:17:13.216 fused_ordering(901) 00:17:13.216 fused_ordering(902) 00:17:13.216 fused_ordering(903) 00:17:13.216 fused_ordering(904) 00:17:13.216 fused_ordering(905) 00:17:13.216 fused_ordering(906) 00:17:13.216 fused_ordering(907) 00:17:13.216 fused_ordering(908) 00:17:13.216 fused_ordering(909) 00:17:13.216 fused_ordering(910) 00:17:13.216 fused_ordering(911) 00:17:13.216 fused_ordering(912) 00:17:13.216 fused_ordering(913) 00:17:13.216 fused_ordering(914) 00:17:13.216 fused_ordering(915) 00:17:13.216 fused_ordering(916) 00:17:13.216 fused_ordering(917) 00:17:13.216 fused_ordering(918) 00:17:13.216 fused_ordering(919) 00:17:13.216 fused_ordering(920) 00:17:13.216 fused_ordering(921) 00:17:13.216 fused_ordering(922) 00:17:13.216 fused_ordering(923) 00:17:13.216 fused_ordering(924) 00:17:13.216 fused_ordering(925) 00:17:13.216 fused_ordering(926) 00:17:13.216 fused_ordering(927) 00:17:13.216 fused_ordering(928) 00:17:13.216 fused_ordering(929) 00:17:13.216 fused_ordering(930) 00:17:13.216 fused_ordering(931) 00:17:13.216 fused_ordering(932) 00:17:13.216 fused_ordering(933) 00:17:13.216 fused_ordering(934) 00:17:13.216 fused_ordering(935) 00:17:13.216 fused_ordering(936) 00:17:13.216 
fused_ordering(937) 00:17:13.216 fused_ordering(938) 00:17:13.216 fused_ordering(939) 00:17:13.216 fused_ordering(940) 00:17:13.216 fused_ordering(941) 00:17:13.216 fused_ordering(942) 00:17:13.216 fused_ordering(943) 00:17:13.216 fused_ordering(944) 00:17:13.216 fused_ordering(945) 00:17:13.216 fused_ordering(946) 00:17:13.216 fused_ordering(947) 00:17:13.216 fused_ordering(948) 00:17:13.216 fused_ordering(949) 00:17:13.216 fused_ordering(950) 00:17:13.216 fused_ordering(951) 00:17:13.216 fused_ordering(952) 00:17:13.216 fused_ordering(953) 00:17:13.216 fused_ordering(954) 00:17:13.216 fused_ordering(955) 00:17:13.216 fused_ordering(956) 00:17:13.216 fused_ordering(957) 00:17:13.216 fused_ordering(958) 00:17:13.216 fused_ordering(959) 00:17:13.216 fused_ordering(960) 00:17:13.216 fused_ordering(961) 00:17:13.216 fused_ordering(962) 00:17:13.216 fused_ordering(963) 00:17:13.216 fused_ordering(964) 00:17:13.216 fused_ordering(965) 00:17:13.216 fused_ordering(966) 00:17:13.216 fused_ordering(967) 00:17:13.216 fused_ordering(968) 00:17:13.216 fused_ordering(969) 00:17:13.216 fused_ordering(970) 00:17:13.216 fused_ordering(971) 00:17:13.216 fused_ordering(972) 00:17:13.216 fused_ordering(973) 00:17:13.216 fused_ordering(974) 00:17:13.216 fused_ordering(975) 00:17:13.216 fused_ordering(976) 00:17:13.216 fused_ordering(977) 00:17:13.216 fused_ordering(978) 00:17:13.216 fused_ordering(979) 00:17:13.216 fused_ordering(980) 00:17:13.216 fused_ordering(981) 00:17:13.216 fused_ordering(982) 00:17:13.216 fused_ordering(983) 00:17:13.216 fused_ordering(984) 00:17:13.216 fused_ordering(985) 00:17:13.216 fused_ordering(986) 00:17:13.216 fused_ordering(987) 00:17:13.216 fused_ordering(988) 00:17:13.216 fused_ordering(989) 00:17:13.216 fused_ordering(990) 00:17:13.216 fused_ordering(991) 00:17:13.216 fused_ordering(992) 00:17:13.216 fused_ordering(993) 00:17:13.216 fused_ordering(994) 00:17:13.216 fused_ordering(995) 00:17:13.216 fused_ordering(996) 00:17:13.216 fused_ordering(997) 
00:17:13.216 fused_ordering(998) 00:17:13.216 fused_ordering(999) 00:17:13.216 fused_ordering(1000) 00:17:13.216 fused_ordering(1001) 00:17:13.217 fused_ordering(1002) 00:17:13.217 fused_ordering(1003) 00:17:13.217 fused_ordering(1004) 00:17:13.217 fused_ordering(1005) 00:17:13.217 fused_ordering(1006) 00:17:13.217 fused_ordering(1007) 00:17:13.217 fused_ordering(1008) 00:17:13.217 fused_ordering(1009) 00:17:13.217 fused_ordering(1010) 00:17:13.217 fused_ordering(1011) 00:17:13.217 fused_ordering(1012) 00:17:13.217 fused_ordering(1013) 00:17:13.217 fused_ordering(1014) 00:17:13.217 fused_ordering(1015) 00:17:13.217 fused_ordering(1016) 00:17:13.217 fused_ordering(1017) 00:17:13.217 fused_ordering(1018) 00:17:13.217 fused_ordering(1019) 00:17:13.217 fused_ordering(1020) 00:17:13.217 fused_ordering(1021) 00:17:13.217 fused_ordering(1022) 00:17:13.217 fused_ordering(1023) 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:13.217 rmmod nvme_tcp 00:17:13.217 rmmod nvme_fabrics 00:17:13.217 rmmod nvme_keyring 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 220191 ']' 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 220191 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 220191 ']' 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 220191 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 220191 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 220191' 00:17:13.217 killing process with pid 220191 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 220191 00:17:13.217 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 220191 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.475 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.382 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:15.382 00:17:15.382 real 0m7.216s 00:17:15.382 user 0m4.821s 00:17:15.382 sys 0m2.775s 00:17:15.382 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.382 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:15.382 ************************************ 00:17:15.382 END TEST nvmf_fused_ordering 00:17:15.382 ************************************ 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:15.640 13:28:07 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 ************************************ 00:17:15.640 START TEST nvmf_ns_masking 00:17:15.640 ************************************ 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:15.640 * Looking for test storage... 00:17:15.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:15.640 13:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:15.640 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:15.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.641 --rc genhtml_branch_coverage=1 00:17:15.641 --rc genhtml_function_coverage=1 00:17:15.641 --rc genhtml_legend=1 00:17:15.641 --rc geninfo_all_blocks=1 00:17:15.641 --rc geninfo_unexecuted_blocks=1 00:17:15.641 00:17:15.641 ' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:15.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.641 --rc genhtml_branch_coverage=1 00:17:15.641 --rc genhtml_function_coverage=1 00:17:15.641 --rc genhtml_legend=1 00:17:15.641 --rc geninfo_all_blocks=1 00:17:15.641 --rc geninfo_unexecuted_blocks=1 00:17:15.641 00:17:15.641 ' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:15.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.641 --rc genhtml_branch_coverage=1 00:17:15.641 --rc genhtml_function_coverage=1 00:17:15.641 --rc genhtml_legend=1 00:17:15.641 --rc geninfo_all_blocks=1 00:17:15.641 --rc geninfo_unexecuted_blocks=1 00:17:15.641 00:17:15.641 ' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:15.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.641 --rc genhtml_branch_coverage=1 00:17:15.641 --rc 
genhtml_function_coverage=1 00:17:15.641 --rc genhtml_legend=1 00:17:15.641 --rc geninfo_all_blocks=1 00:17:15.641 --rc geninfo_unexecuted_blocks=1 00:17:15.641 00:17:15.641 ' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:15.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7a3d625e-0b9d-4035-b880-aa537c7accba 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=154aa230-aebd-4020-82a9-ed7de8688b88 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7e448350-3d9b-4492-97cb-6d23c1ecd1b5 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g 
is_hw=no 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:15.641 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:18.178 13:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.178 13:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:18.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:18.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:17:18.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:18.178 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:17:18.178 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:18.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:17:18.179 00:17:18.179 --- 10.0.0.2 ping statistics --- 00:17:18.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.179 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:17:18.179 00:17:18.179 --- 10.0.0.1 ping statistics --- 00:17:18.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.179 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=222477 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 222477 
00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 222477 ']' 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.179 [2024-10-14 13:28:09.742614] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:17:18.179 [2024-10-14 13:28:09.742690] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.179 [2024-10-14 13:28:09.808122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.179 [2024-10-14 13:28:09.854627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.179 [2024-10-14 13:28:09.854673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:18.179 [2024-10-14 13:28:09.854701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.179 [2024-10-14 13:28:09.854712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.179 [2024-10-14 13:28:09.854722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.179 [2024-10-14 13:28:09.855278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.179 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:18.437 [2024-10-14 13:28:10.246855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.437 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:18.437 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:18.437 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:18.695 Malloc1 00:17:18.695 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:19.263 Malloc2 00:17:19.263 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:19.520 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:19.779 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.779 [2024-10-14 13:28:11.625751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.037 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:20.037 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7e448350-3d9b-4492-97cb-6d23c1ecd1b5 -a 10.0.0.2 -s 4420 -i 4 00:17:20.037 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.037 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:20.037 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.037 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:20.037 13:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:21.936 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:21.936 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:21.936 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:22.194 [ 0]:0x1 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.194 
13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1545f1bcb36b47449fc9136c1f16434c 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1545f1bcb36b47449fc9136c1f16434c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.194 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:22.453 [ 0]:0x1 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1545f1bcb36b47449fc9136c1f16434c 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1545f1bcb36b47449fc9136c1f16434c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:22.453 [ 1]:0x2 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3aed5a09553451082f88ef0e85bf7a4 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3aed5a09553451082f88ef0e85bf7a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:22.453 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.711 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.969 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:23.227 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:23.227 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7e448350-3d9b-4492-97cb-6d23c1ecd1b5 -a 10.0.0.2 -s 4420 -i 4 00:17:23.485 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:23.485 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:23.485 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.485 13:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:23.485 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:23.485 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
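The `NOT ns_is_visible 0x1` calls that follow use a negation wrapper from autotest_common.sh: the wrapped command is expected to fail, and the wrapper inverts its exit status. A simplified sketch of that pattern (the real `NOT` also validates the argument with `valid_exec_arg` and tracks the exit code in `es`, as the `es=1` trace lines show):

```shell
# Simplified NOT: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert: non-zero exit from the command becomes success here.
    (( es != 0 ))
}

NOT false && echo 'command failed, as the test expects'
NOT true  || echo 'command unexpectedly succeeded'
```

This is why a masked namespace makes the test pass: `ns_is_visible` fails, `NOT` turns that failure into success.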
00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.383 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.641 [ 0]:0x2 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3aed5a09553451082f88ef0e85bf7a4 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3aed5a09553451082f88ef0e85bf7a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.641 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:25.900 [ 0]:0x1 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1545f1bcb36b47449fc9136c1f16434c 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1545f1bcb36b47449fc9136c1f16434c != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:25.900 [ 1]:0x2 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3aed5a09553451082f88ef0e85bf7a4 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3aed5a09553451082f88ef0e85bf7a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.900 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:26.158 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:26.159 [ 0]:0x2 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq 
-r .nguid 00:17:26.159 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:26.416 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3aed5a09553451082f88ef0e85bf7a4 00:17:26.416 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3aed5a09553451082f88ef0e85bf7a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:26.416 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:26.416 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.416 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.674 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:26.674 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7e448350-3d9b-4492-97cb-6d23c1ecd1b5 -a 10.0.0.2 -s 4420 -i 4 00:17:26.674 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:26.675 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:26.675 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.675 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:26.675 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:26.675 13:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.206 [ 0]:0x1 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.206 13:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1545f1bcb36b47449fc9136c1f16434c 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1545f1bcb36b47449fc9136c1f16434c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.206 [ 1]:0x2 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3aed5a09553451082f88ef0e85bf7a4 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3aed5a09553451082f88ef0e85bf7a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:29.206 
13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:29.206 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.207 [ 0]:0x2 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3aed5a09553451082f88ef0e85bf7a4 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3aed5a09553451082f88ef0e85bf7a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.207 13:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:29.207 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:29.465 [2024-10-14 13:28:21.222454] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:29.465 request: 00:17:29.465 { 00:17:29.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.465 "nsid": 2, 00:17:29.465 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.465 "method": "nvmf_ns_remove_host", 00:17:29.465 "req_id": 1 00:17:29.465 } 00:17:29.465 Got JSON-RPC error response 00:17:29.465 response: 00:17:29.465 { 00:17:29.465 "code": -32602, 00:17:29.465 "message": "Invalid parameters" 00:17:29.465 } 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:29.465 13:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:29.465 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:29.723 [ 0]:0x2 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a3aed5a09553451082f88ef0e85bf7a4 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a3aed5a09553451082f88ef0e85bf7a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:29.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=224042 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 224042 /var/tmp/host.sock 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 224042 ']' 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:29.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.723 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:29.982 [2024-10-14 13:28:21.580418] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
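`waitforlisten 224042 /var/tmp/host.sock` above blocks until the freshly started `spdk_tgt` is accepting RPCs on its UNIX socket. A rough sketch of the polling idea, under the assumption that socket existence is a good-enough readiness signal (the real helper in autotest_common.sh is more thorough and also checks the pid):

```shell
# Hypothetical poll: wait for a UNIX socket to appear, bounded by max_retries.
waitforsocket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

waitforsocket /tmp/no-such.sock 3 || echo 'timed out waiting for socket'
```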
00:17:29.982 [2024-10-14 13:28:21.580499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224042 ] 00:17:29.982 [2024-10-14 13:28:21.639753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.982 [2024-10-14 13:28:21.684463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.240 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.240 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:30.240 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.498 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:30.756 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7a3d625e-0b9d-4035-b880-aa537c7accba 00:17:30.756 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:30.756 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7A3D625E0B9D4035B880AA537C7ACCBA -i 00:17:31.014 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 154aa230-aebd-4020-82a9-ed7de8688b88 00:17:31.014 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:31.014 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 154AA230AEBD402082A9ED7DE8688B88 -i 00:17:31.271 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:31.529 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:31.787 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:31.787 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:32.352 nvme0n1 00:17:32.352 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:32.352 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:32.610 nvme1n2 00:17:32.610 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:32.610 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:32.610 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:32.610 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:32.610 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:32.868 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:32.868 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:32.868 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:32.868 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:33.126 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7a3d625e-0b9d-4035-b880-aa537c7accba == \7\a\3\d\6\2\5\e\-\0\b\9\d\-\4\0\3\5\-\b\8\8\0\-\a\a\5\3\7\c\7\a\c\c\b\a ]] 00:17:33.126 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:33.126 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:33.126 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:33.384 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 154aa230-aebd-4020-82a9-ed7de8688b88 == \1\5\4\a\a\2\3\0\-\a\e\b\d\-\4\0\2\0\-\8\2\a\9\-\e\d\7\d\e\8\6\8\8\b\8\8 ]] 00:17:33.384 13:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 224042 00:17:33.384 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 224042 ']' 00:17:33.384 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 224042 00:17:33.384 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:33.384 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.384 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 224042 00:17:33.642 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:33.642 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:33.642 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 224042' 00:17:33.642 killing process with pid 224042 00:17:33.642 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 224042 00:17:33.642 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 224042 00:17:33.900 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.158 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@121 -- # sync 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.159 rmmod nvme_tcp 00:17:34.159 rmmod nvme_fabrics 00:17:34.159 rmmod nvme_keyring 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 222477 ']' 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 222477 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 222477 ']' 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 222477 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.159 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 222477 00:17:34.417 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.417 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.417 13:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 222477' 00:17:34.417 killing process with pid 222477 00:17:34.417 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 222477 00:17:34.417 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 222477 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.677 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.587 00:17:36.587 real 0m21.055s 00:17:36.587 user 0m27.600s 00:17:36.587 sys 
0m4.194s 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:36.587 ************************************ 00:17:36.587 END TEST nvmf_ns_masking 00:17:36.587 ************************************ 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.587 ************************************ 00:17:36.587 START TEST nvmf_nvme_cli 00:17:36.587 ************************************ 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:36.587 * Looking for test storage... 
00:17:36.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:17:36.587 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:36.848 13:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc 
genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:36.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.848 --rc genhtml_branch_coverage=1 00:17:36.848 --rc genhtml_function_coverage=1 00:17:36.848 --rc genhtml_legend=1 00:17:36.848 --rc geninfo_all_blocks=1 00:17:36.848 --rc geninfo_unexecuted_blocks=1 00:17:36.848 00:17:36.848 ' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.848 13:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.848 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.849 13:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.849 13:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.849 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:39.385 13:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:39.385 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:39.385 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.385 13:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:39.385 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:39.385 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.385 13:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:39.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:17:39.385 00:17:39.385 --- 10.0.0.2 ping statistics --- 00:17:39.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.385 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:17:39.385 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:17:39.386 00:17:39.386 --- 10.0.0.1 ping statistics --- 00:17:39.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.386 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:39.386 13:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=226538 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 226538 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 226538 ']' 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.386 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 [2024-10-14 13:28:30.870163] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:17:39.386 [2024-10-14 13:28:30.870250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.386 [2024-10-14 13:28:30.940146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.386 [2024-10-14 13:28:30.990800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.386 [2024-10-14 13:28:30.990861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.386 [2024-10-14 13:28:30.990875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.386 [2024-10-14 13:28:30.990886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.386 [2024-10-14 13:28:30.990895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.386 [2024-10-14 13:28:30.992577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.386 [2024-10-14 13:28:30.992636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.386 [2024-10-14 13:28:30.992705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.386 [2024-10-14 13:28:30.992708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 [2024-10-14 13:28:31.144072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 Malloc0 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 Malloc1 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.386 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.644 [2024-10-14 13:28:31.239909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:39.644 00:17:39.644 Discovery Log Number of Records 2, Generation counter 2 00:17:39.644 =====Discovery Log Entry 0====== 00:17:39.644 trtype: tcp 00:17:39.644 adrfam: ipv4 00:17:39.644 subtype: current discovery subsystem 00:17:39.644 treq: not required 00:17:39.644 portid: 0 00:17:39.644 trsvcid: 4420 
00:17:39.644 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:39.644 traddr: 10.0.0.2 00:17:39.644 eflags: explicit discovery connections, duplicate discovery information 00:17:39.644 sectype: none 00:17:39.644 =====Discovery Log Entry 1====== 00:17:39.644 trtype: tcp 00:17:39.644 adrfam: ipv4 00:17:39.644 subtype: nvme subsystem 00:17:39.644 treq: not required 00:17:39.644 portid: 0 00:17:39.644 trsvcid: 4420 00:17:39.644 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:39.644 traddr: 10.0.0.2 00:17:39.644 eflags: none 00:17:39.644 sectype: none 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:39.644 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:40.577 13:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:40.577 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:40.577 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.577 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:40.577 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:40.577 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:42.573 
13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:42.573 /dev/nvme0n2 ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:42.573 rmmod nvme_tcp 00:17:42.573 rmmod nvme_fabrics 00:17:42.573 rmmod nvme_keyring 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 226538 ']' 
00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 226538 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 226538 ']' 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 226538 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 226538 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 226538' 00:17:42.573 killing process with pid 226538 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 226538 00:17:42.573 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 226538 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.843 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.768 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:44.768 00:17:44.768 real 0m8.240s 00:17:44.768 user 0m14.942s 00:17:44.768 sys 0m2.282s 00:17:44.768 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.768 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.768 ************************************ 00:17:44.768 END TEST nvmf_nvme_cli 00:17:44.768 ************************************ 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.028 ************************************ 00:17:45.028 START TEST 
nvmf_vfio_user 00:17:45.028 ************************************ 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:45.028 * Looking for test storage... 00:17:45.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.028 13:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:45.028 13:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.028 --rc genhtml_branch_coverage=1 00:17:45.028 --rc genhtml_function_coverage=1 00:17:45.028 --rc genhtml_legend=1 00:17:45.028 --rc geninfo_all_blocks=1 00:17:45.028 --rc geninfo_unexecuted_blocks=1 00:17:45.028 00:17:45.028 ' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.028 --rc genhtml_branch_coverage=1 00:17:45.028 --rc genhtml_function_coverage=1 00:17:45.028 --rc genhtml_legend=1 00:17:45.028 --rc geninfo_all_blocks=1 00:17:45.028 --rc geninfo_unexecuted_blocks=1 00:17:45.028 00:17:45.028 ' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.028 --rc genhtml_branch_coverage=1 00:17:45.028 --rc genhtml_function_coverage=1 00:17:45.028 --rc genhtml_legend=1 00:17:45.028 --rc geninfo_all_blocks=1 00:17:45.028 --rc geninfo_unexecuted_blocks=1 00:17:45.028 00:17:45.028 ' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.028 --rc genhtml_branch_coverage=1 00:17:45.028 --rc genhtml_function_coverage=1 00:17:45.028 --rc genhtml_legend=1 00:17:45.028 --rc geninfo_all_blocks=1 00:17:45.028 --rc geninfo_unexecuted_blocks=1 00:17:45.028 00:17:45.028 ' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.028 
13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.028 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:45.029 13:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=227355 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 227355' 00:17:45.029 Process pid: 227355 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 227355 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 
227355 ']' 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.029 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:45.287 [2024-10-14 13:28:36.898120] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:17:45.287 [2024-10-14 13:28:36.898238] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.287 [2024-10-14 13:28:36.959234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.287 [2024-10-14 13:28:37.007708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.287 [2024-10-14 13:28:37.007770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.287 [2024-10-14 13:28:37.007798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.287 [2024-10-14 13:28:37.007809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.287 [2024-10-14 13:28:37.007818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:45.287 [2024-10-14 13:28:37.009359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.287 [2024-10-14 13:28:37.009389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.287 [2024-10-14 13:28:37.009452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.287 [2024-10-14 13:28:37.009455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.544 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.544 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:45.544 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:46.476 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:46.734 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:46.734 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:46.734 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:46.734 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:46.734 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:46.993 Malloc1 00:17:46.993 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:47.250 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:47.507 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:47.764 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.764 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:47.764 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:48.330 Malloc2 00:17:48.330 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:48.330 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:48.896 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:48.896 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:48.896 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:48.896 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:48.896 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:48.896 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:48.896 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:49.156 [2024-10-14 13:28:40.764813] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:17:49.156 [2024-10-14 13:28:40.764856] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227902 ] 00:17:49.156 [2024-10-14 13:28:40.800554] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:49.156 [2024-10-14 13:28:40.808621] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:49.156 [2024-10-14 13:28:40.808651] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5528aa5000 00:17:49.156 [2024-10-14 13:28:40.809618] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.810613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.811634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.812621] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.813625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.814628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.815633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.816636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:49.156 [2024-10-14 13:28:40.817648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:49.156 [2024-10-14 13:28:40.817670] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f552779d000 00:17:49.156 [2024-10-14 13:28:40.818789] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:49.156 [2024-10-14 13:28:40.834773] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:49.156 [2024-10-14 13:28:40.834815] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:49.156 [2024-10-14 13:28:40.839789] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:49.156 [2024-10-14 13:28:40.839846] 
nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:49.156 [2024-10-14 13:28:40.839946] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:49.156 [2024-10-14 13:28:40.839980] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:49.156 [2024-10-14 13:28:40.839991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:49.156 [2024-10-14 13:28:40.840778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:49.156 [2024-10-14 13:28:40.840800] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:49.156 [2024-10-14 13:28:40.840813] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:49.156 [2024-10-14 13:28:40.841776] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:49.156 [2024-10-14 13:28:40.841795] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:49.156 [2024-10-14 13:28:40.841809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:49.156 [2024-10-14 13:28:40.842784] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:49.156 [2024-10-14 13:28:40.842802] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:49.156 [2024-10-14 13:28:40.843793] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:49.156 [2024-10-14 13:28:40.843812] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:49.156 [2024-10-14 13:28:40.843821] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:49.156 [2024-10-14 13:28:40.843832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:49.156 [2024-10-14 13:28:40.843942] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:49.157 [2024-10-14 13:28:40.843949] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:49.157 [2024-10-14 13:28:40.843962] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:49.157 [2024-10-14 13:28:40.844800] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:49.157 [2024-10-14 13:28:40.845802] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:49.157 [2024-10-14 13:28:40.846807] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:49.157 [2024-10-14 13:28:40.847801] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:49.157 [2024-10-14 13:28:40.847912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:49.157 [2024-10-14 13:28:40.848820] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:49.157 [2024-10-14 13:28:40.848837] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:49.157 [2024-10-14 13:28:40.848846] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.848869] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:49.157 [2024-10-14 13:28:40.848883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.848916] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:49.157 [2024-10-14 13:28:40.848926] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:49.157 [2024-10-14 13:28:40.848932] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:49.157 [2024-10-14 13:28:40.848955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:49.157 [2024-10-14 13:28:40.849009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:49.157 [2024-10-14 
13:28:40.849028] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:49.157 [2024-10-14 13:28:40.849036] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:49.157 [2024-10-14 13:28:40.849043] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:49.157 [2024-10-14 13:28:40.849050] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:49.157 [2024-10-14 13:28:40.849058] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:49.157 [2024-10-14 13:28:40.849066] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:49.157 [2024-10-14 13:28:40.849073] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:49.157 [2024-10-14 13:28:40.849142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:49.157 [2024-10-14 13:28:40.849169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.157 [2024-10-14 13:28:40.849199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:17:49.157 [2024-10-14 13:28:40.849211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.157 [2024-10-14 13:28:40.849223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.157 [2024-10-14 13:28:40.849232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849248] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:49.157 [2024-10-14 13:28:40.849277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:49.157 [2024-10-14 13:28:40.849288] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:49.157 [2024-10-14 13:28:40.849297] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849323] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:49.157 [2024-10-14 13:28:40.849351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:49.157 [2024-10-14 13:28:40.849435] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849452] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849466] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:49.157 [2024-10-14 13:28:40.849474] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:49.157 [2024-10-14 13:28:40.849480] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:49.157 [2024-10-14 13:28:40.849503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:49.157 [2024-10-14 13:28:40.849518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:49.157 [2024-10-14 13:28:40.849535] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:49.157 [2024-10-14 13:28:40.849556] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849582] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:49.157 [2024-10-14 13:28:40.849593] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:49.157 [2024-10-14 13:28:40.849600] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:49.157 [2024-10-14 13:28:40.849609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:49.157 [2024-10-14 13:28:40.849640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:49.157 [2024-10-14 13:28:40.849663] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:49.157 [2024-10-14 13:28:40.849689] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:49.157 [2024-10-14 13:28:40.849697] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:49.157 [2024-10-14 13:28:40.849702] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:49.157 [2024-10-14 13:28:40.849711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:49.157 [2024-10-14 13:28:40.849729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:49.157 [2024-10-14 13:28:40.849743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:49.158 [2024-10-14 13:28:40.849754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:49.158 [2024-10-14 13:28:40.849768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:49.158 [2024-10-14 13:28:40.849779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:49.158 [2024-10-14 13:28:40.849787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:49.158 [2024-10-14 13:28:40.849796] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:49.158 [2024-10-14 13:28:40.849805] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:49.158 [2024-10-14 13:28:40.849812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:49.158 [2024-10-14 13:28:40.849820] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:49.158 [2024-10-14 13:28:40.849848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:49.158 [2024-10-14 13:28:40.849863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:49.158 [2024-10-14 13:28:40.849881] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:49.158 [2024-10-14 13:28:40.849892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:49.158 [2024-10-14 13:28:40.849908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:49.158 [2024-10-14 13:28:40.849919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:49.158 [2024-10-14 13:28:40.849939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:49.158 [2024-10-14 13:28:40.849952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:49.158 [2024-10-14 13:28:40.849974] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:49.158 [2024-10-14 13:28:40.849984] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:49.158 [2024-10-14 13:28:40.849990] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:49.158 [2024-10-14 13:28:40.849996] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:49.158 [2024-10-14 13:28:40.850002] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:49.158 [2024-10-14 13:28:40.850011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:49.158 [2024-10-14 13:28:40.850022] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:49.158 [2024-10-14 
13:28:40.850029] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:49.158 [2024-10-14 13:28:40.850035] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:49.158 [2024-10-14 13:28:40.850043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:49.158 [2024-10-14 13:28:40.850053] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:49.158 [2024-10-14 13:28:40.850061] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:49.158 [2024-10-14 13:28:40.850066] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:49.158 [2024-10-14 13:28:40.850074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:49.158 [2024-10-14 13:28:40.850086] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:49.158 [2024-10-14 13:28:40.850094] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:49.158 [2024-10-14 13:28:40.850099] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:49.158 [2024-10-14 13:28:40.850107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:49.158 [2024-10-14 13:28:40.850140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:49.158 [2024-10-14 13:28:40.850164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:17:49.158 [2024-10-14 13:28:40.850183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:49.158 [2024-10-14 13:28:40.850195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:49.158 ===================================================== 00:17:49.158 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:49.158 ===================================================== 00:17:49.158 Controller Capabilities/Features 00:17:49.158 ================================ 00:17:49.158 Vendor ID: 4e58 00:17:49.158 Subsystem Vendor ID: 4e58 00:17:49.158 Serial Number: SPDK1 00:17:49.158 Model Number: SPDK bdev Controller 00:17:49.158 Firmware Version: 25.01 00:17:49.158 Recommended Arb Burst: 6 00:17:49.158 IEEE OUI Identifier: 8d 6b 50 00:17:49.158 Multi-path I/O 00:17:49.158 May have multiple subsystem ports: Yes 00:17:49.158 May have multiple controllers: Yes 00:17:49.158 Associated with SR-IOV VF: No 00:17:49.158 Max Data Transfer Size: 131072 00:17:49.158 Max Number of Namespaces: 32 00:17:49.158 Max Number of I/O Queues: 127 00:17:49.158 NVMe Specification Version (VS): 1.3 00:17:49.158 NVMe Specification Version (Identify): 1.3 00:17:49.158 Maximum Queue Entries: 256 00:17:49.158 Contiguous Queues Required: Yes 00:17:49.158 Arbitration Mechanisms Supported 00:17:49.158 Weighted Round Robin: Not Supported 00:17:49.158 Vendor Specific: Not Supported 00:17:49.158 Reset Timeout: 15000 ms 00:17:49.158 Doorbell Stride: 4 bytes 00:17:49.158 NVM Subsystem Reset: Not Supported 00:17:49.158 Command Sets Supported 00:17:49.158 NVM Command Set: Supported 00:17:49.158 Boot Partition: Not Supported 00:17:49.158 Memory Page Size Minimum: 4096 bytes 00:17:49.158 Memory Page Size Maximum: 4096 bytes 00:17:49.158 Persistent Memory Region: Not Supported 00:17:49.158 Optional Asynchronous Events 
Supported 00:17:49.158 Namespace Attribute Notices: Supported 00:17:49.158 Firmware Activation Notices: Not Supported 00:17:49.158 ANA Change Notices: Not Supported 00:17:49.158 PLE Aggregate Log Change Notices: Not Supported 00:17:49.158 LBA Status Info Alert Notices: Not Supported 00:17:49.158 EGE Aggregate Log Change Notices: Not Supported 00:17:49.158 Normal NVM Subsystem Shutdown event: Not Supported 00:17:49.158 Zone Descriptor Change Notices: Not Supported 00:17:49.158 Discovery Log Change Notices: Not Supported 00:17:49.158 Controller Attributes 00:17:49.158 128-bit Host Identifier: Supported 00:17:49.158 Non-Operational Permissive Mode: Not Supported 00:17:49.158 NVM Sets: Not Supported 00:17:49.158 Read Recovery Levels: Not Supported 00:17:49.158 Endurance Groups: Not Supported 00:17:49.158 Predictable Latency Mode: Not Supported 00:17:49.158 Traffic Based Keep ALive: Not Supported 00:17:49.158 Namespace Granularity: Not Supported 00:17:49.158 SQ Associations: Not Supported 00:17:49.158 UUID List: Not Supported 00:17:49.158 Multi-Domain Subsystem: Not Supported 00:17:49.158 Fixed Capacity Management: Not Supported 00:17:49.158 Variable Capacity Management: Not Supported 00:17:49.158 Delete Endurance Group: Not Supported 00:17:49.158 Delete NVM Set: Not Supported 00:17:49.158 Extended LBA Formats Supported: Not Supported 00:17:49.158 Flexible Data Placement Supported: Not Supported 00:17:49.158 00:17:49.158 Controller Memory Buffer Support 00:17:49.158 ================================ 00:17:49.158 Supported: No 00:17:49.158 00:17:49.158 Persistent Memory Region Support 00:17:49.158 ================================ 00:17:49.158 Supported: No 00:17:49.158 00:17:49.158 Admin Command Set Attributes 00:17:49.158 ============================ 00:17:49.158 Security Send/Receive: Not Supported 00:17:49.158 Format NVM: Not Supported 00:17:49.158 Firmware Activate/Download: Not Supported 00:17:49.158 Namespace Management: Not Supported 00:17:49.158 Device Self-Test: 
Not Supported 00:17:49.158 Directives: Not Supported 00:17:49.158 NVMe-MI: Not Supported 00:17:49.158 Virtualization Management: Not Supported 00:17:49.158 Doorbell Buffer Config: Not Supported 00:17:49.158 Get LBA Status Capability: Not Supported 00:17:49.158 Command & Feature Lockdown Capability: Not Supported 00:17:49.158 Abort Command Limit: 4 00:17:49.158 Async Event Request Limit: 4 00:17:49.158 Number of Firmware Slots: N/A 00:17:49.158 Firmware Slot 1 Read-Only: N/A 00:17:49.158 Firmware Activation Without Reset: N/A 00:17:49.158 Multiple Update Detection Support: N/A 00:17:49.158 Firmware Update Granularity: No Information Provided 00:17:49.158 Per-Namespace SMART Log: No 00:17:49.158 Asymmetric Namespace Access Log Page: Not Supported 00:17:49.158 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:49.158 Command Effects Log Page: Supported 00:17:49.158 Get Log Page Extended Data: Supported 00:17:49.159 Telemetry Log Pages: Not Supported 00:17:49.159 Persistent Event Log Pages: Not Supported 00:17:49.159 Supported Log Pages Log Page: May Support 00:17:49.159 Commands Supported & Effects Log Page: Not Supported 00:17:49.159 Feature Identifiers & Effects Log Page:May Support 00:17:49.159 NVMe-MI Commands & Effects Log Page: May Support 00:17:49.159 Data Area 4 for Telemetry Log: Not Supported 00:17:49.159 Error Log Page Entries Supported: 128 00:17:49.159 Keep Alive: Supported 00:17:49.159 Keep Alive Granularity: 10000 ms 00:17:49.159 00:17:49.159 NVM Command Set Attributes 00:17:49.159 ========================== 00:17:49.159 Submission Queue Entry Size 00:17:49.159 Max: 64 00:17:49.159 Min: 64 00:17:49.159 Completion Queue Entry Size 00:17:49.159 Max: 16 00:17:49.159 Min: 16 00:17:49.159 Number of Namespaces: 32 00:17:49.159 Compare Command: Supported 00:17:49.159 Write Uncorrectable Command: Not Supported 00:17:49.159 Dataset Management Command: Supported 00:17:49.159 Write Zeroes Command: Supported 00:17:49.159 Set Features Save Field: Not Supported 
00:17:49.159 Reservations: Not Supported 00:17:49.159 Timestamp: Not Supported 00:17:49.159 Copy: Supported 00:17:49.159 Volatile Write Cache: Present 00:17:49.159 Atomic Write Unit (Normal): 1 00:17:49.159 Atomic Write Unit (PFail): 1 00:17:49.159 Atomic Compare & Write Unit: 1 00:17:49.159 Fused Compare & Write: Supported 00:17:49.159 Scatter-Gather List 00:17:49.159 SGL Command Set: Supported (Dword aligned) 00:17:49.159 SGL Keyed: Not Supported 00:17:49.159 SGL Bit Bucket Descriptor: Not Supported 00:17:49.159 SGL Metadata Pointer: Not Supported 00:17:49.159 Oversized SGL: Not Supported 00:17:49.159 SGL Metadata Address: Not Supported 00:17:49.159 SGL Offset: Not Supported 00:17:49.159 Transport SGL Data Block: Not Supported 00:17:49.159 Replay Protected Memory Block: Not Supported 00:17:49.159 00:17:49.159 Firmware Slot Information 00:17:49.159 ========================= 00:17:49.159 Active slot: 1 00:17:49.159 Slot 1 Firmware Revision: 25.01 00:17:49.159 00:17:49.159 00:17:49.159 Commands Supported and Effects 00:17:49.159 ============================== 00:17:49.159 Admin Commands 00:17:49.159 -------------- 00:17:49.159 Get Log Page (02h): Supported 00:17:49.159 Identify (06h): Supported 00:17:49.159 Abort (08h): Supported 00:17:49.159 Set Features (09h): Supported 00:17:49.159 Get Features (0Ah): Supported 00:17:49.159 Asynchronous Event Request (0Ch): Supported 00:17:49.159 Keep Alive (18h): Supported 00:17:49.159 I/O Commands 00:17:49.159 ------------ 00:17:49.159 Flush (00h): Supported LBA-Change 00:17:49.159 Write (01h): Supported LBA-Change 00:17:49.159 Read (02h): Supported 00:17:49.159 Compare (05h): Supported 00:17:49.159 Write Zeroes (08h): Supported LBA-Change 00:17:49.159 Dataset Management (09h): Supported LBA-Change 00:17:49.159 Copy (19h): Supported LBA-Change 00:17:49.159 00:17:49.159 Error Log 00:17:49.159 ========= 00:17:49.159 00:17:49.159 Arbitration 00:17:49.159 =========== 00:17:49.159 Arbitration Burst: 1 00:17:49.159 00:17:49.159 Power 
Management 00:17:49.159 ================ 00:17:49.159 Number of Power States: 1 00:17:49.159 Current Power State: Power State #0 00:17:49.159 Power State #0: 00:17:49.159 Max Power: 0.00 W 00:17:49.159 Non-Operational State: Operational 00:17:49.159 Entry Latency: Not Reported 00:17:49.159 Exit Latency: Not Reported 00:17:49.159 Relative Read Throughput: 0 00:17:49.159 Relative Read Latency: 0 00:17:49.159 Relative Write Throughput: 0 00:17:49.159 Relative Write Latency: 0 00:17:49.159 Idle Power: Not Reported 00:17:49.159 Active Power: Not Reported 00:17:49.159 Non-Operational Permissive Mode: Not Supported 00:17:49.159 00:17:49.159 Health Information 00:17:49.159 ================== 00:17:49.159 Critical Warnings: 00:17:49.159 Available Spare Space: OK 00:17:49.159 Temperature: OK 00:17:49.159 Device Reliability: OK 00:17:49.159 Read Only: No 00:17:49.159 Volatile Memory Backup: OK 00:17:49.159 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:49.159 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:49.159 Available Spare: 0% 00:17:49.159 Available Sp[2024-10-14 13:28:40.850327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:49.159 [2024-10-14 13:28:40.850344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:49.159 [2024-10-14 13:28:40.850394] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:49.159 [2024-10-14 13:28:40.850413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.159 [2024-10-14 13:28:40.850425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.159 [2024-10-14 13:28:40.850438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.159 [2024-10-14 13:28:40.850464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.159 [2024-10-14 13:28:40.854139] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:49.159 [2024-10-14 13:28:40.854164] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:49.159 [2024-10-14 13:28:40.854845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:49.159 [2024-10-14 13:28:40.854930] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:49.159 [2024-10-14 13:28:40.854944] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:49.159 [2024-10-14 13:28:40.855859] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:49.159 [2024-10-14 13:28:40.855883] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:49.159 [2024-10-14 13:28:40.855945] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:49.159 [2024-10-14 13:28:40.857900] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:49.159 are Threshold: 0% 00:17:49.159 Life Percentage Used: 0% 00:17:49.159 Data Units Read: 0 00:17:49.159 Data Units Written: 0 00:17:49.159 Host Read Commands: 0 00:17:49.159 Host Write Commands: 0 00:17:49.159 Controller Busy Time: 0 minutes 
00:17:49.159 Power Cycles: 0 00:17:49.159 Power On Hours: 0 hours 00:17:49.159 Unsafe Shutdowns: 0 00:17:49.159 Unrecoverable Media Errors: 0 00:17:49.159 Lifetime Error Log Entries: 0 00:17:49.159 Warning Temperature Time: 0 minutes 00:17:49.159 Critical Temperature Time: 0 minutes 00:17:49.159 00:17:49.159 Number of Queues 00:17:49.159 ================ 00:17:49.159 Number of I/O Submission Queues: 127 00:17:49.159 Number of I/O Completion Queues: 127 00:17:49.159 00:17:49.159 Active Namespaces 00:17:49.159 ================= 00:17:49.159 Namespace ID:1 00:17:49.159 Error Recovery Timeout: Unlimited 00:17:49.159 Command Set Identifier: NVM (00h) 00:17:49.159 Deallocate: Supported 00:17:49.159 Deallocated/Unwritten Error: Not Supported 00:17:49.159 Deallocated Read Value: Unknown 00:17:49.159 Deallocate in Write Zeroes: Not Supported 00:17:49.159 Deallocated Guard Field: 0xFFFF 00:17:49.159 Flush: Supported 00:17:49.159 Reservation: Supported 00:17:49.159 Namespace Sharing Capabilities: Multiple Controllers 00:17:49.159 Size (in LBAs): 131072 (0GiB) 00:17:49.159 Capacity (in LBAs): 131072 (0GiB) 00:17:49.159 Utilization (in LBAs): 131072 (0GiB) 00:17:49.159 NGUID: 00CEC097A9BE4E1999A5A017693737F3 00:17:49.159 UUID: 00cec097-a9be-4e19-99a5-a017693737f3 00:17:49.159 Thin Provisioning: Not Supported 00:17:49.159 Per-NS Atomic Units: Yes 00:17:49.159 Atomic Boundary Size (Normal): 0 00:17:49.159 Atomic Boundary Size (PFail): 0 00:17:49.159 Atomic Boundary Offset: 0 00:17:49.159 Maximum Single Source Range Length: 65535 00:17:49.159 Maximum Copy Length: 65535 00:17:49.159 Maximum Source Range Count: 1 00:17:49.159 NGUID/EUI64 Never Reused: No 00:17:49.160 Namespace Write Protected: No 00:17:49.160 Number of LBA Formats: 1 00:17:49.160 Current LBA Format: LBA Format #00 00:17:49.160 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:49.160 00:17:49.160 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:49.417 [2024-10-14 13:28:41.087999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:54.684 Initializing NVMe Controllers 00:17:54.684 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:54.684 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:54.684 Initialization complete. Launching workers. 00:17:54.684 ======================================================== 00:17:54.684 Latency(us) 00:17:54.684 Device Information : IOPS MiB/s Average min max 00:17:54.684 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33533.71 130.99 3816.95 1203.02 10076.77 00:17:54.684 ======================================================== 00:17:54.684 Total : 33533.71 130.99 3816.95 1203.02 10076.77 00:17:54.684 00:17:54.684 [2024-10-14 13:28:46.107235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:54.684 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:54.684 [2024-10-14 13:28:46.351415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:59.949 Initializing NVMe Controllers 00:17:59.949 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:59.949 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:59.949 
Initialization complete. Launching workers. 00:17:59.949 ======================================================== 00:17:59.949 Latency(us) 00:17:59.949 Device Information : IOPS MiB/s Average min max 00:17:59.949 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15897.84 62.10 8056.64 6958.29 15989.69 00:17:59.949 ======================================================== 00:17:59.949 Total : 15897.84 62.10 8056.64 6958.29 15989.69 00:17:59.949 00:17:59.949 [2024-10-14 13:28:51.393157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:59.949 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:59.949 [2024-10-14 13:28:51.597184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:05.216 [2024-10-14 13:28:56.671554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:05.216 Initializing NVMe Controllers 00:18:05.216 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:05.216 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:05.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:05.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:05.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:05.216 Initialization complete. Launching workers. 
00:18:05.216 Starting thread on core 2 00:18:05.216 Starting thread on core 3 00:18:05.216 Starting thread on core 1 00:18:05.216 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:05.216 [2024-10-14 13:28:56.975243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.500 [2024-10-14 13:29:00.289431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.500 Initializing NVMe Controllers 00:18:08.500 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.500 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.500 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:08.500 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:08.500 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:08.500 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:08.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:08.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:08.500 Initialization complete. Launching workers. 
00:18:08.500 Starting thread on core 1 with urgent priority queue 00:18:08.500 Starting thread on core 2 with urgent priority queue 00:18:08.500 Starting thread on core 3 with urgent priority queue 00:18:08.500 Starting thread on core 0 with urgent priority queue 00:18:08.500 SPDK bdev Controller (SPDK1 ) core 0: 2811.33 IO/s 35.57 secs/100000 ios 00:18:08.500 SPDK bdev Controller (SPDK1 ) core 1: 2939.00 IO/s 34.03 secs/100000 ios 00:18:08.500 SPDK bdev Controller (SPDK1 ) core 2: 3041.67 IO/s 32.88 secs/100000 ios 00:18:08.500 SPDK bdev Controller (SPDK1 ) core 3: 2929.67 IO/s 34.13 secs/100000 ios 00:18:08.500 ======================================================== 00:18:08.500 00:18:08.500 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:08.758 [2024-10-14 13:29:00.588659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.017 Initializing NVMe Controllers 00:18:09.017 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.017 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.017 Namespace ID: 1 size: 0GB 00:18:09.017 Initialization complete. 00:18:09.017 INFO: using host memory buffer for IO 00:18:09.017 Hello world! 
00:18:09.017 [2024-10-14 13:29:00.626414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.017 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:09.277 [2024-10-14 13:29:00.911615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:10.214 Initializing NVMe Controllers 00:18:10.214 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.214 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:10.214 Initialization complete. Launching workers. 00:18:10.214 submit (in ns) avg, min, max = 6788.4, 3530.0, 4002154.4 00:18:10.214 complete (in ns) avg, min, max = 27834.9, 2058.9, 4016508.9 00:18:10.214 00:18:10.214 Submit histogram 00:18:10.214 ================ 00:18:10.214 Range in us Cumulative Count 00:18:10.214 3.508 - 3.532: 0.0078% ( 1) 00:18:10.214 3.532 - 3.556: 0.1318% ( 16) 00:18:10.214 3.556 - 3.579: 0.5350% ( 52) 00:18:10.214 3.579 - 3.603: 2.1712% ( 211) 00:18:10.214 3.603 - 3.627: 4.8077% ( 340) 00:18:10.214 3.627 - 3.650: 10.8328% ( 777) 00:18:10.214 3.650 - 3.674: 18.5406% ( 994) 00:18:10.214 3.674 - 3.698: 27.7373% ( 1186) 00:18:10.214 3.698 - 3.721: 37.1200% ( 1210) 00:18:10.214 3.721 - 3.745: 45.5180% ( 1083) 00:18:10.214 3.745 - 3.769: 51.8766% ( 820) 00:18:10.214 3.769 - 3.793: 57.4984% ( 725) 00:18:10.214 3.793 - 3.816: 61.5307% ( 520) 00:18:10.214 3.816 - 3.840: 65.1675% ( 469) 00:18:10.214 3.840 - 3.864: 68.6802% ( 453) 00:18:10.214 3.864 - 3.887: 72.2239% ( 457) 00:18:10.214 3.887 - 3.911: 75.8375% ( 466) 00:18:10.214 3.911 - 3.935: 80.0093% ( 538) 00:18:10.214 3.935 - 3.959: 83.3980% ( 437) 00:18:10.214 3.959 - 3.982: 86.1663% ( 357) 00:18:10.214 3.982 - 4.006: 88.4926% ( 300) 
00:18:10.214 4.006 - 4.030: 90.1055% ( 208) 00:18:10.214 4.030 - 4.053: 91.6563% ( 200) 00:18:10.214 4.053 - 4.077: 92.9513% ( 167) 00:18:10.214 4.077 - 4.101: 93.8586% ( 117) 00:18:10.214 4.101 - 4.124: 94.6417% ( 101) 00:18:10.214 4.124 - 4.148: 95.3862% ( 96) 00:18:10.214 4.148 - 4.172: 95.7971% ( 53) 00:18:10.214 4.172 - 4.196: 96.1771% ( 49) 00:18:10.214 4.196 - 4.219: 96.4097% ( 30) 00:18:10.214 4.219 - 4.243: 96.5261% ( 15) 00:18:10.214 4.243 - 4.267: 96.6889% ( 21) 00:18:10.214 4.267 - 4.290: 96.7587% ( 9) 00:18:10.214 4.290 - 4.314: 96.8207% ( 8) 00:18:10.214 4.314 - 4.338: 96.9060% ( 11) 00:18:10.214 4.338 - 4.361: 96.9991% ( 12) 00:18:10.214 4.361 - 4.385: 97.0921% ( 12) 00:18:10.214 4.385 - 4.409: 97.1852% ( 12) 00:18:10.214 4.409 - 4.433: 97.2007% ( 2) 00:18:10.214 4.433 - 4.456: 97.2472% ( 6) 00:18:10.214 4.456 - 4.480: 97.3015% ( 7) 00:18:10.214 4.480 - 4.504: 97.3403% ( 5) 00:18:10.214 4.504 - 4.527: 97.3868% ( 6) 00:18:10.214 4.575 - 4.599: 97.4023% ( 2) 00:18:10.214 4.599 - 4.622: 97.4100% ( 1) 00:18:10.214 4.622 - 4.646: 97.4256% ( 2) 00:18:10.215 4.670 - 4.693: 97.4333% ( 1) 00:18:10.215 4.693 - 4.717: 97.4876% ( 7) 00:18:10.215 4.717 - 4.741: 97.5574% ( 9) 00:18:10.215 4.741 - 4.764: 97.5884% ( 4) 00:18:10.215 4.764 - 4.788: 97.6349% ( 6) 00:18:10.215 4.788 - 4.812: 97.6892% ( 7) 00:18:10.215 4.812 - 4.836: 97.7357% ( 6) 00:18:10.215 4.836 - 4.859: 97.7823% ( 6) 00:18:10.215 4.859 - 4.883: 97.8133% ( 4) 00:18:10.215 4.883 - 4.907: 97.8753% ( 8) 00:18:10.215 4.907 - 4.930: 97.8986% ( 3) 00:18:10.215 4.930 - 4.954: 97.9296% ( 4) 00:18:10.215 4.954 - 4.978: 97.9451% ( 2) 00:18:10.215 4.978 - 5.001: 97.9839% ( 5) 00:18:10.215 5.001 - 5.025: 98.0071% ( 3) 00:18:10.215 5.025 - 5.049: 98.0226% ( 2) 00:18:10.215 5.049 - 5.073: 98.0537% ( 4) 00:18:10.215 5.073 - 5.096: 98.0847% ( 4) 00:18:10.215 5.096 - 5.120: 98.0924% ( 1) 00:18:10.215 5.120 - 5.144: 98.1002% ( 1) 00:18:10.215 5.144 - 5.167: 98.1312% ( 4) 00:18:10.215 5.167 - 5.191: 98.1390% ( 1) 
00:18:10.215 5.191 - 5.215: 98.1545% ( 2) 00:18:10.215 5.215 - 5.239: 98.1622% ( 1) 00:18:10.215 5.239 - 5.262: 98.1700% ( 1) 00:18:10.215 5.262 - 5.286: 98.1855% ( 2) 00:18:10.215 5.333 - 5.357: 98.2010% ( 2) 00:18:10.215 5.404 - 5.428: 98.2165% ( 2) 00:18:10.215 5.594 - 5.618: 98.2243% ( 1) 00:18:10.215 5.618 - 5.641: 98.2320% ( 1) 00:18:10.215 5.784 - 5.807: 98.2398% ( 1) 00:18:10.215 5.879 - 5.902: 98.2475% ( 1) 00:18:10.215 6.305 - 6.353: 98.2553% ( 1) 00:18:10.215 6.590 - 6.637: 98.2630% ( 1) 00:18:10.215 6.684 - 6.732: 98.2708% ( 1) 00:18:10.215 6.969 - 7.016: 98.2785% ( 1) 00:18:10.215 7.016 - 7.064: 98.2863% ( 1) 00:18:10.215 7.064 - 7.111: 98.2940% ( 1) 00:18:10.215 7.159 - 7.206: 98.3018% ( 1) 00:18:10.215 7.206 - 7.253: 98.3251% ( 3) 00:18:10.215 7.253 - 7.301: 98.3406% ( 2) 00:18:10.215 7.301 - 7.348: 98.3483% ( 1) 00:18:10.215 7.348 - 7.396: 98.3561% ( 1) 00:18:10.215 7.396 - 7.443: 98.3716% ( 2) 00:18:10.215 7.443 - 7.490: 98.3871% ( 2) 00:18:10.215 7.490 - 7.538: 98.4026% ( 2) 00:18:10.215 7.585 - 7.633: 98.4259% ( 3) 00:18:10.215 7.633 - 7.680: 98.4336% ( 1) 00:18:10.215 7.680 - 7.727: 98.4491% ( 2) 00:18:10.215 7.870 - 7.917: 98.4569% ( 1) 00:18:10.215 7.964 - 8.012: 98.4801% ( 3) 00:18:10.215 8.012 - 8.059: 98.4879% ( 1) 00:18:10.215 8.107 - 8.154: 98.5034% ( 2) 00:18:10.215 8.344 - 8.391: 98.5112% ( 1) 00:18:10.215 8.391 - 8.439: 98.5189% ( 1) 00:18:10.215 8.439 - 8.486: 98.5267% ( 1) 00:18:10.215 8.533 - 8.581: 98.5344% ( 1) 00:18:10.215 8.723 - 8.770: 98.5422% ( 1) 00:18:10.215 8.818 - 8.865: 98.5499% ( 1) 00:18:10.215 8.913 - 8.960: 98.5577% ( 1) 00:18:10.215 9.007 - 9.055: 98.5654% ( 1) 00:18:10.215 9.102 - 9.150: 98.5732% ( 1) 00:18:10.215 9.197 - 9.244: 98.5810% ( 1) 00:18:10.215 9.387 - 9.434: 98.5887% ( 1) 00:18:10.215 9.434 - 9.481: 98.5965% ( 1) 00:18:10.215 9.481 - 9.529: 98.6042% ( 1) 00:18:10.215 9.576 - 9.624: 98.6120% ( 1) 00:18:10.215 9.624 - 9.671: 98.6275% ( 2) 00:18:10.215 9.671 - 9.719: 98.6352% ( 1) 00:18:10.215 9.766 - 
9.813: 98.6430% ( 1) 00:18:10.215 9.813 - 9.861: 98.6507% ( 1) 00:18:10.215 10.003 - 10.050: 98.6585% ( 1) 00:18:10.215 10.050 - 10.098: 98.6663% ( 1) 00:18:10.215 10.193 - 10.240: 98.6740% ( 1) 00:18:10.215 10.619 - 10.667: 98.6818% ( 1) 00:18:10.215 10.714 - 10.761: 98.6895% ( 1) 00:18:10.215 11.046 - 11.093: 98.6973% ( 1) 00:18:10.215 11.378 - 11.425: 98.7050% ( 1) 00:18:10.215 11.473 - 11.520: 98.7128% ( 1) 00:18:10.215 11.662 - 11.710: 98.7205% ( 1) 00:18:10.215 11.804 - 11.852: 98.7283% ( 1) 00:18:10.215 11.899 - 11.947: 98.7360% ( 1) 00:18:10.215 11.994 - 12.041: 98.7516% ( 2) 00:18:10.215 12.326 - 12.421: 98.7593% ( 1) 00:18:10.215 12.421 - 12.516: 98.7748% ( 2) 00:18:10.215 12.516 - 12.610: 98.7826% ( 1) 00:18:10.215 12.705 - 12.800: 98.7903% ( 1) 00:18:10.215 13.084 - 13.179: 98.8058% ( 2) 00:18:10.215 13.179 - 13.274: 98.8136% ( 1) 00:18:10.215 13.274 - 13.369: 98.8213% ( 1) 00:18:10.215 13.464 - 13.559: 98.8291% ( 1) 00:18:10.215 13.559 - 13.653: 98.8368% ( 1) 00:18:10.215 13.938 - 14.033: 98.8524% ( 2) 00:18:10.215 14.127 - 14.222: 98.8601% ( 1) 00:18:10.215 14.601 - 14.696: 98.8679% ( 1) 00:18:10.215 14.791 - 14.886: 98.8756% ( 1) 00:18:10.215 16.213 - 16.308: 98.8834% ( 1) 00:18:10.215 16.593 - 16.687: 98.8911% ( 1) 00:18:10.215 17.161 - 17.256: 98.9066% ( 2) 00:18:10.215 17.256 - 17.351: 98.9221% ( 2) 00:18:10.215 17.351 - 17.446: 98.9377% ( 2) 00:18:10.215 17.446 - 17.541: 98.9687% ( 4) 00:18:10.215 17.541 - 17.636: 98.9842% ( 2) 00:18:10.215 17.636 - 17.730: 99.0385% ( 7) 00:18:10.215 17.730 - 17.825: 99.0927% ( 7) 00:18:10.215 17.825 - 17.920: 99.1238% ( 4) 00:18:10.215 17.920 - 18.015: 99.2556% ( 17) 00:18:10.215 18.015 - 18.110: 99.3254% ( 9) 00:18:10.215 18.110 - 18.204: 99.3874% ( 8) 00:18:10.215 18.204 - 18.299: 99.4184% ( 4) 00:18:10.215 18.299 - 18.394: 99.4805% ( 8) 00:18:10.215 18.394 - 18.489: 99.5425% ( 8) 00:18:10.215 18.489 - 18.584: 99.6278% ( 11) 00:18:10.215 18.584 - 18.679: 99.6821% ( 7) 00:18:10.215 18.679 - 18.773: 99.7131% ( 
4) 00:18:10.215 18.773 - 18.868: 99.7364% ( 3) 00:18:10.215 18.868 - 18.963: 99.7596% ( 3) 00:18:10.215 18.963 - 19.058: 99.7829% ( 3) 00:18:10.215 19.058 - 19.153: 99.7906% ( 1) 00:18:10.215 19.153 - 19.247: 99.8139% ( 3) 00:18:10.215 19.247 - 19.342: 99.8372% ( 3) 00:18:10.215 19.342 - 19.437: 99.8527% ( 2) 00:18:10.215 19.437 - 19.532: 99.8682% ( 2) 00:18:10.215 19.627 - 19.721: 99.8759% ( 1) 00:18:10.215 19.816 - 19.911: 99.8914% ( 2) 00:18:10.215 21.713 - 21.807: 99.8992% ( 1) 00:18:10.215 22.092 - 22.187: 99.9069% ( 1) 00:18:10.215 22.471 - 22.566: 99.9147% ( 1) 00:18:10.215 24.652 - 24.841: 99.9225% ( 1) 00:18:10.215 25.031 - 25.221: 99.9302% ( 1) 00:18:10.215 3980.705 - 4004.978: 100.0000% ( 9) 00:18:10.215 00:18:10.215 Complete histogram 00:18:10.215 ================== 00:18:10.215 Range in us Cumulative Count 00:18:10.215 2.050 - 2.062: 0.3567% ( 46) 00:18:10.215 2.062 - 2.074: 31.4051% ( 4004) 00:18:10.215 2.074 - 2.086: 50.8995% ( 2514) 00:18:10.215 2.086 - 2.098: 52.7683% ( 241) 00:18:10.215 2.098 - 2.110: 57.8474% ( 655) 00:18:10.215 2.110 - 2.121: 59.5301% ( 217) 00:18:10.215 2.121 - 2.133: 63.0040% ( 448) 00:18:10.215 2.133 - 2.145: 77.4736% ( 1866) 00:18:10.215 2.145 - 2.157: 82.2038% ( 610) 00:18:10.215 2.157 - 2.169: 83.4445% ( 160) 00:18:10.215 2.169 - 2.181: 85.6622% ( 286) 00:18:10.215 2.181 - 2.193: 86.4919% ( 107) 00:18:10.215 2.193 - 2.204: 87.5853% ( 141) 00:18:10.215 2.204 - 2.216: 89.9891% ( 310) 00:18:10.215 2.216 - 2.228: 92.0440% ( 265) 00:18:10.215 2.228 - 2.240: 93.6414% ( 206) 00:18:10.215 2.240 - 2.252: 94.2540% ( 79) 00:18:10.215 2.252 - 2.264: 94.4556% ( 26) 00:18:10.215 2.264 - 2.276: 94.6185% ( 21) 00:18:10.215 2.276 - 2.287: 94.9054% ( 37) 00:18:10.215 2.287 - 2.299: 95.3474% ( 57) 00:18:10.215 2.299 - 2.311: 95.7118% ( 47) 00:18:10.215 2.311 - 2.323: 95.8824% ( 22) 00:18:10.215 2.323 - 2.335: 95.9057% ( 3) 00:18:10.215 2.335 - 2.347: 95.9445% ( 5) 00:18:10.215 2.347 - 2.359: 96.0143% ( 9) 00:18:10.215 2.359 - 2.370: 96.1616% 
( 19) 00:18:10.215 2.370 - 2.382: 96.4175% ( 33) 00:18:10.215 2.382 - 2.394: 96.8362% ( 54) 00:18:10.215 2.394 - 2.406: 97.0766% ( 31) 00:18:10.215 2.406 - 2.418: 97.2550% ( 23) 00:18:10.215 2.418 - 2.430: 97.4798% ( 29) 00:18:10.215 2.430 - 2.441: 97.7512% ( 35) 00:18:10.215 2.441 - 2.453: 97.9063% ( 20) 00:18:10.215 2.453 - 2.465: 98.0614% ( 20) 00:18:10.215 2.465 - 2.477: 98.1545% ( 12) 00:18:10.215 2.477 - 2.489: 98.2553% ( 13) 00:18:10.215 2.489 - 2.501: 98.3096% ( 7) 00:18:10.215 2.501 - 2.513: 98.3483% ( 5) 00:18:10.215 2.513 - 2.524: 98.3871% ( 5) 00:18:10.215 2.524 - 2.536: 98.3949% ( 1) 00:18:10.215 2.536 - 2.548: 98.4104% ( 2) 00:18:10.215 2.548 - 2.560: 98.4336% ( 3) 00:18:10.215 2.560 - 2.572: 98.4491% ( 2) 00:18:10.215 2.572 - 2.584: 98.4569% ( 1) 00:18:10.215 2.607 - 2.619: 98.4646% ( 1) 00:18:10.215 2.631 - 2.643: 98.4724% ( 1) 00:18:10.215 2.750 - 2.761: 98.4801% ( 1) 00:18:10.215 2.773 - 2.785: 98.4957% ( 2) 00:18:10.215 2.785 - 2.797: 98.5034% ( 1) 00:18:10.215 2.809 - 2.821: 98.5189% ( 2) 00:18:10.215 2.844 - 2.856: 98.5267% ( 1) 00:18:10.215 3.129 - 3.153: 98.5344% ( 1) 00:18:10.215 3.176 - 3.200: 98.5422% ( 1) 00:18:10.215 3.271 - 3.295: 98.5577% ( 2) 00:18:10.215 3.319 - 3.342: 98.5732% ( 2) 00:18:10.215 3.366 - 3.390: 98.5965% ( 3) 00:18:10.215 3.390 - 3.413: 98.6120% ( 2) 00:18:10.215 3.413 - 3.437: 98.6430% ( 4) 00:18:10.215 3.437 - 3.461: 98.6507% ( 1) 00:18:10.215 3.461 - 3.484: 98.6585% ( 1) 00:18:10.215 3.532 - 3.556: 98.6663% ( 1) 00:18:10.215 3.556 - 3.579: 98.6740% ( 1) 00:18:10.215 3.674 - 3.698: 98.6818% ( 1) 00:18:10.215 3.698 - 3.721: 98.6895% ( 1) 00:18:10.216 3.769 - 3.793: 98.7128% ( 3) 00:18:10.216 3.816 - 3.840: 98.7283% ( 2) 00:18:10.216 3.887 - 3.911: 98.7360% ( 1) 00:18:10.216 4.267 - 4.290: 98.7438% ( 1) 00:18:10.216 5.215 - 5.239: 98.7516% ( 1) 00:18:10.216 5.239 - 5.262: 98.7593% ( 1) 00:18:10.216 5.713 - 5.736: 98.7671% ( 1) 00:18:10.216 6.116 - 6.163: 98.7748% ( 1) 00:18:10.216 6.210 - 6.258: 98.7826% ( 1) 
00:18:10.216 6.305 - 6.353: 98.7903% ( 1) [2024-10-14 13:29:01.937488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:10.216 8.296 - 8.344: 98.7981% ( 1) 00:18:10.216 8.344 - 8.391: 98.8058% ( 1) 00:18:10.216 11.899 - 11.947: 98.8136% ( 1) 00:18:10.216 15.360 - 15.455: 98.8291% ( 2) 00:18:10.216 15.455 - 15.550: 98.8368% ( 1) 00:18:10.216 15.644 - 15.739: 98.8679% ( 4) 00:18:10.216 15.929 - 16.024: 98.8989% ( 4) 00:18:10.216 16.024 - 16.119: 98.9377% ( 5) 00:18:10.216 16.119 - 16.213: 98.9532% ( 2) 00:18:10.216 16.213 - 16.308: 98.9919% ( 5) 00:18:10.216 16.308 - 16.403: 99.0230% ( 4) 00:18:10.216 16.403 - 16.498: 99.0695% ( 6) 00:18:10.216 16.498 - 16.593: 99.1160% ( 6) 00:18:10.216 16.593 - 16.687: 99.1548% ( 5) 00:18:10.216 16.687 - 16.782: 99.1858% ( 4) 00:18:10.216 16.782 - 16.877: 99.2478% ( 8) 00:18:10.216 16.877 - 16.972: 99.2711% ( 3) 00:18:10.216 16.972 - 17.067: 99.2788% ( 1) 00:18:10.216 17.161 - 17.256: 99.3021% ( 3) 00:18:10.216 17.351 - 17.446: 99.3099% ( 1) 00:18:10.216 17.446 - 17.541: 99.3254% ( 2) 00:18:10.216 17.920 - 18.015: 99.3331% ( 1) 00:18:10.216 18.015 - 18.110: 99.3409% ( 1) 00:18:10.216 18.584 - 18.679: 99.3486% ( 1) 00:18:10.216 18.679 - 18.773: 99.3564% ( 1) 00:18:10.216 2621.440 - 2633.576: 99.3641% ( 1) 00:18:10.216 3980.705 - 4004.978: 99.9147% ( 71) 00:18:10.216 4004.978 - 4029.250: 100.0000% ( 11) 00:18:10.216 00:18:10.216 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:10.216 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:10.216 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:10.216 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user --
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:10.216 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:10.474 [ 00:18:10.474 { 00:18:10.474 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:10.474 "subtype": "Discovery", 00:18:10.474 "listen_addresses": [], 00:18:10.474 "allow_any_host": true, 00:18:10.474 "hosts": [] 00:18:10.474 }, 00:18:10.474 { 00:18:10.474 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:10.474 "subtype": "NVMe", 00:18:10.474 "listen_addresses": [ 00:18:10.474 { 00:18:10.474 "trtype": "VFIOUSER", 00:18:10.474 "adrfam": "IPv4", 00:18:10.474 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:10.474 "trsvcid": "0" 00:18:10.474 } 00:18:10.474 ], 00:18:10.474 "allow_any_host": true, 00:18:10.474 "hosts": [], 00:18:10.474 "serial_number": "SPDK1", 00:18:10.474 "model_number": "SPDK bdev Controller", 00:18:10.474 "max_namespaces": 32, 00:18:10.474 "min_cntlid": 1, 00:18:10.474 "max_cntlid": 65519, 00:18:10.474 "namespaces": [ 00:18:10.474 { 00:18:10.474 "nsid": 1, 00:18:10.474 "bdev_name": "Malloc1", 00:18:10.474 "name": "Malloc1", 00:18:10.474 "nguid": "00CEC097A9BE4E1999A5A017693737F3", 00:18:10.474 "uuid": "00cec097-a9be-4e19-99a5-a017693737f3" 00:18:10.474 } 00:18:10.474 ] 00:18:10.474 }, 00:18:10.474 { 00:18:10.474 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:10.474 "subtype": "NVMe", 00:18:10.474 "listen_addresses": [ 00:18:10.474 { 00:18:10.474 "trtype": "VFIOUSER", 00:18:10.474 "adrfam": "IPv4", 00:18:10.474 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:10.474 "trsvcid": "0" 00:18:10.474 } 00:18:10.474 ], 00:18:10.474 "allow_any_host": true, 00:18:10.474 "hosts": [], 00:18:10.474 "serial_number": "SPDK2", 00:18:10.474 "model_number": "SPDK bdev Controller", 00:18:10.474 "max_namespaces": 32, 00:18:10.474 "min_cntlid": 1, 00:18:10.474 "max_cntlid": 65519, 00:18:10.474 "namespaces": [ 
00:18:10.474 { 00:18:10.474 "nsid": 1, 00:18:10.474 "bdev_name": "Malloc2", 00:18:10.474 "name": "Malloc2", 00:18:10.474 "nguid": "F69EFE3B4D764412B2EE1641DC999D19", 00:18:10.474 "uuid": "f69efe3b-4d76-4412-b2ee-1641dc999d19" 00:18:10.474 } 00:18:10.474 ] 00:18:10.474 } 00:18:10.474 ] 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=230432 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:18:10.474 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:10.733 [2024-10-14 13:29:02.425902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:10.733 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:10.992 Malloc3 00:18:10.992 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:11.251 [2024-10-14 13:29:03.034399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:11.251 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:11.251 Asynchronous Event Request test 00:18:11.251 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:11.251 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:11.251 
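The trace above exercises a `waitforfile` helper from autotest_common.sh: it polls for /tmp/aer_touch_file, incrementing a counter and sleeping 0.1 s per iteration. A minimal re-sketch of that loop, with behavior inferred only from the `'[' '!' -e ... ']'` test, the `'[' $i -lt 200 ']'` bound, and the `sleep 0.1` calls visible in the log (the exact helper may differ):

```shell
# Sketch of a waitforfile-style helper: poll until $1 exists,
# checking every 0.1 s, giving up after 200 attempts (~20 s).
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        if [ "$i" -ge 200 ]; then
            return 1    # timed out waiting for the file
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0    # file appeared
}
```

The AER test uses this so the RPC that triggers the namespace-change event only runs after the aer binary has signalled (by touching the file) that its callbacks are registered.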
Registering asynchronous event callbacks... 00:18:11.251 Starting namespace attribute notice tests for all controllers... 00:18:11.251 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:11.251 aer_cb - Changed Namespace 00:18:11.251 Cleaning up... 00:18:11.509 [ 00:18:11.509 { 00:18:11.509 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:11.509 "subtype": "Discovery", 00:18:11.509 "listen_addresses": [], 00:18:11.509 "allow_any_host": true, 00:18:11.509 "hosts": [] 00:18:11.509 }, 00:18:11.509 { 00:18:11.509 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:11.509 "subtype": "NVMe", 00:18:11.509 "listen_addresses": [ 00:18:11.509 { 00:18:11.509 "trtype": "VFIOUSER", 00:18:11.510 "adrfam": "IPv4", 00:18:11.510 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:11.510 "trsvcid": "0" 00:18:11.510 } 00:18:11.510 ], 00:18:11.510 "allow_any_host": true, 00:18:11.510 "hosts": [], 00:18:11.510 "serial_number": "SPDK1", 00:18:11.510 "model_number": "SPDK bdev Controller", 00:18:11.510 "max_namespaces": 32, 00:18:11.510 "min_cntlid": 1, 00:18:11.510 "max_cntlid": 65519, 00:18:11.510 "namespaces": [ 00:18:11.510 { 00:18:11.510 "nsid": 1, 00:18:11.510 "bdev_name": "Malloc1", 00:18:11.510 "name": "Malloc1", 00:18:11.510 "nguid": "00CEC097A9BE4E1999A5A017693737F3", 00:18:11.510 "uuid": "00cec097-a9be-4e19-99a5-a017693737f3" 00:18:11.510 }, 00:18:11.510 { 00:18:11.510 "nsid": 2, 00:18:11.510 "bdev_name": "Malloc3", 00:18:11.510 "name": "Malloc3", 00:18:11.510 "nguid": "0E8FBD62CCE64E2785060CB2C3EB4939", 00:18:11.510 "uuid": "0e8fbd62-cce6-4e27-8506-0cb2c3eb4939" 00:18:11.510 } 00:18:11.510 ] 00:18:11.510 }, 00:18:11.510 { 00:18:11.510 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:11.510 "subtype": "NVMe", 00:18:11.510 "listen_addresses": [ 00:18:11.510 { 00:18:11.510 "trtype": "VFIOUSER", 00:18:11.510 "adrfam": "IPv4", 00:18:11.510 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:11.510 "trsvcid": "0" 
00:18:11.510 } 00:18:11.510 ], 00:18:11.510 "allow_any_host": true, 00:18:11.510 "hosts": [], 00:18:11.510 "serial_number": "SPDK2", 00:18:11.510 "model_number": "SPDK bdev Controller", 00:18:11.510 "max_namespaces": 32, 00:18:11.510 "min_cntlid": 1, 00:18:11.510 "max_cntlid": 65519, 00:18:11.510 "namespaces": [ 00:18:11.510 { 00:18:11.510 "nsid": 1, 00:18:11.510 "bdev_name": "Malloc2", 00:18:11.510 "name": "Malloc2", 00:18:11.510 "nguid": "F69EFE3B4D764412B2EE1641DC999D19", 00:18:11.510 "uuid": "f69efe3b-4d76-4412-b2ee-1641dc999d19" 00:18:11.510 } 00:18:11.510 ] 00:18:11.510 } 00:18:11.510 ] 00:18:11.510 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 230432 00:18:11.510 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:11.510 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:11.510 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:11.510 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:11.510 [2024-10-14 13:29:03.354746] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:18:11.510 [2024-10-14 13:29:03.354784] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230566 ] 00:18:11.770 [2024-10-14 13:29:03.389029] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:11.770 [2024-10-14 13:29:03.393395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:11.770 [2024-10-14 13:29:03.393450] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f70eec68000 00:18:11.770 [2024-10-14 13:29:03.394393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.770 [2024-10-14 13:29:03.395400] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.770 [2024-10-14 13:29:03.396406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.770 [2024-10-14 13:29:03.397409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:11.770 [2024-10-14 13:29:03.398422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:11.770 [2024-10-14 13:29:03.399443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.770 [2024-10-14 13:29:03.400445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:11.770 
[2024-10-14 13:29:03.401457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:11.770 [2024-10-14 13:29:03.402456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:11.770 [2024-10-14 13:29:03.402494] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f70ed960000 00:18:11.770 [2024-10-14 13:29:03.403648] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:11.770 [2024-10-14 13:29:03.418788] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:11.770 [2024-10-14 13:29:03.418827] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:11.770 [2024-10-14 13:29:03.423947] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:11.770 [2024-10-14 13:29:03.424002] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:11.770 [2024-10-14 13:29:03.424094] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:11.770 [2024-10-14 13:29:03.424142] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:11.770 [2024-10-14 13:29:03.424156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:11.770 [2024-10-14 13:29:03.424956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:11.770 [2024-10-14 13:29:03.424977] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:11.770 [2024-10-14 13:29:03.424989] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:11.770 [2024-10-14 13:29:03.425957] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:11.770 [2024-10-14 13:29:03.425977] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:11.770 [2024-10-14 13:29:03.425991] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:11.770 [2024-10-14 13:29:03.426967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:11.770 [2024-10-14 13:29:03.426987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:11.770 [2024-10-14 13:29:03.427974] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:11.770 [2024-10-14 13:29:03.427994] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:11.770 [2024-10-14 13:29:03.428003] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:11.770 [2024-10-14 13:29:03.428014] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:11.770 [2024-10-14 13:29:03.428126] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:11.770 [2024-10-14 13:29:03.428142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:11.770 [2024-10-14 13:29:03.428151] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:11.770 [2024-10-14 13:29:03.428983] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:11.770 [2024-10-14 13:29:03.429992] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:11.770 [2024-10-14 13:29:03.431002] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:11.770 [2024-10-14 13:29:03.431997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:11.770 [2024-10-14 13:29:03.432076] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:11.770 [2024-10-14 13:29:03.433015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:11.770 [2024-10-14 13:29:03.433034] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:11.770 [2024-10-14 13:29:03.433044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.433066] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:11.770 [2024-10-14 13:29:03.433080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.433119] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:11.770 [2024-10-14 13:29:03.433136] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.770 [2024-10-14 13:29:03.433147] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.770 [2024-10-14 13:29:03.433168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:11.770 [2024-10-14 13:29:03.441161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:11.770 [2024-10-14 13:29:03.441185] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:11.770 [2024-10-14 13:29:03.441196] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:11.770 [2024-10-14 13:29:03.441203] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:11.770 [2024-10-14 13:29:03.441210] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:11.770 [2024-10-14 13:29:03.441218] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:11.770 [2024-10-14 13:29:03.441226] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:11.770 [2024-10-14 13:29:03.441234] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.441246] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.441263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:11.770 [2024-10-14 13:29:03.449137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:11.770 [2024-10-14 13:29:03.449161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.770 [2024-10-14 13:29:03.449175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.770 [2024-10-14 13:29:03.449187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.770 [2024-10-14 13:29:03.449199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.770 [2024-10-14 13:29:03.449207] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.449224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.449239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:11.770 [2024-10-14 13:29:03.457138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:11.770 [2024-10-14 13:29:03.457156] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:11.770 [2024-10-14 13:29:03.457166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.457178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.457192] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.457212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:11.770 [2024-10-14 13:29:03.465155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:11.770 [2024-10-14 13:29:03.465230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.465246] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:11.770 [2024-10-14 13:29:03.465260] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:11.770 [2024-10-14 13:29:03.465269] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:11.770 [2024-10-14 13:29:03.465275] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.770 [2024-10-14 13:29:03.465285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.473153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.473176] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:11.771 [2024-10-14 13:29:03.473196] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.473212] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.473225] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:11.771 [2024-10-14 13:29:03.473233] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.771 [2024-10-14 13:29:03.473240] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.771 [2024-10-14 13:29:03.473250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.481154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.481182] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.481198] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.481212] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:11.771 [2024-10-14 13:29:03.481221] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.771 [2024-10-14 13:29:03.481227] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.771 [2024-10-14 13:29:03.481236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.489151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.489173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.489187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.489202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.489219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.489229] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.489238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.489247] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:11.771 [2024-10-14 13:29:03.489254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:11.771 [2024-10-14 13:29:03.489263] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:11.771 [2024-10-14 13:29:03.489289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.497141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.497167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.505137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.505163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.513140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.513165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.521140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.521171] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:11.771 [2024-10-14 13:29:03.521183] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:11.771 [2024-10-14 13:29:03.521189] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:11.771 [2024-10-14 13:29:03.521196] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:11.771 [2024-10-14 13:29:03.521201] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:11.771 [2024-10-14 13:29:03.521211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:11.771 [2024-10-14 13:29:03.521223] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:11.771 [2024-10-14 13:29:03.521231] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:11.771 [2024-10-14 13:29:03.521237] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.771 [2024-10-14 13:29:03.521246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.521258] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:11.771 [2024-10-14 13:29:03.521266] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:11.771 
[2024-10-14 13:29:03.521272] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.771 [2024-10-14 13:29:03.521284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.521298] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:11.771 [2024-10-14 13:29:03.521306] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:11.771 [2024-10-14 13:29:03.521312] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:11.771 [2024-10-14 13:29:03.521321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:11.771 [2024-10-14 13:29:03.529142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.529169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.529187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:11.771 [2024-10-14 13:29:03.529199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:11.771 ===================================================== 00:18:11.771 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:11.771 ===================================================== 00:18:11.771 Controller Capabilities/Features 00:18:11.771 ================================ 00:18:11.771 Vendor ID: 4e58 00:18:11.771 Subsystem Vendor ID: 4e58 
00:18:11.771 Serial Number: SPDK2 00:18:11.771 Model Number: SPDK bdev Controller 00:18:11.771 Firmware Version: 25.01 00:18:11.771 Recommended Arb Burst: 6 00:18:11.771 IEEE OUI Identifier: 8d 6b 50 00:18:11.771 Multi-path I/O 00:18:11.771 May have multiple subsystem ports: Yes 00:18:11.771 May have multiple controllers: Yes 00:18:11.771 Associated with SR-IOV VF: No 00:18:11.771 Max Data Transfer Size: 131072 00:18:11.771 Max Number of Namespaces: 32 00:18:11.771 Max Number of I/O Queues: 127 00:18:11.771 NVMe Specification Version (VS): 1.3 00:18:11.771 NVMe Specification Version (Identify): 1.3 00:18:11.771 Maximum Queue Entries: 256 00:18:11.771 Contiguous Queues Required: Yes 00:18:11.771 Arbitration Mechanisms Supported 00:18:11.771 Weighted Round Robin: Not Supported 00:18:11.771 Vendor Specific: Not Supported 00:18:11.771 Reset Timeout: 15000 ms 00:18:11.771 Doorbell Stride: 4 bytes 00:18:11.771 NVM Subsystem Reset: Not Supported 00:18:11.771 Command Sets Supported 00:18:11.771 NVM Command Set: Supported 00:18:11.771 Boot Partition: Not Supported 00:18:11.771 Memory Page Size Minimum: 4096 bytes 00:18:11.771 Memory Page Size Maximum: 4096 bytes 00:18:11.771 Persistent Memory Region: Not Supported 00:18:11.771 Optional Asynchronous Events Supported 00:18:11.771 Namespace Attribute Notices: Supported 00:18:11.771 Firmware Activation Notices: Not Supported 00:18:11.771 ANA Change Notices: Not Supported 00:18:11.771 PLE Aggregate Log Change Notices: Not Supported 00:18:11.771 LBA Status Info Alert Notices: Not Supported 00:18:11.771 EGE Aggregate Log Change Notices: Not Supported 00:18:11.771 Normal NVM Subsystem Shutdown event: Not Supported 00:18:11.771 Zone Descriptor Change Notices: Not Supported 00:18:11.771 Discovery Log Change Notices: Not Supported 00:18:11.771 Controller Attributes 00:18:11.771 128-bit Host Identifier: Supported 00:18:11.771 Non-Operational Permissive Mode: Not Supported 00:18:11.771 NVM Sets: Not Supported 00:18:11.771 Read Recovery 
Levels: Not Supported 00:18:11.771 Endurance Groups: Not Supported 00:18:11.771 Predictable Latency Mode: Not Supported 00:18:11.771 Traffic Based Keep ALive: Not Supported 00:18:11.771 Namespace Granularity: Not Supported 00:18:11.771 SQ Associations: Not Supported 00:18:11.771 UUID List: Not Supported 00:18:11.771 Multi-Domain Subsystem: Not Supported 00:18:11.771 Fixed Capacity Management: Not Supported 00:18:11.771 Variable Capacity Management: Not Supported 00:18:11.771 Delete Endurance Group: Not Supported 00:18:11.771 Delete NVM Set: Not Supported 00:18:11.771 Extended LBA Formats Supported: Not Supported 00:18:11.771 Flexible Data Placement Supported: Not Supported 00:18:11.771 00:18:11.771 Controller Memory Buffer Support 00:18:11.771 ================================ 00:18:11.771 Supported: No 00:18:11.771 00:18:11.771 Persistent Memory Region Support 00:18:11.771 ================================ 00:18:11.771 Supported: No 00:18:11.771 00:18:11.771 Admin Command Set Attributes 00:18:11.771 ============================ 00:18:11.771 Security Send/Receive: Not Supported 00:18:11.771 Format NVM: Not Supported 00:18:11.771 Firmware Activate/Download: Not Supported 00:18:11.771 Namespace Management: Not Supported 00:18:11.771 Device Self-Test: Not Supported 00:18:11.772 Directives: Not Supported 00:18:11.772 NVMe-MI: Not Supported 00:18:11.772 Virtualization Management: Not Supported 00:18:11.772 Doorbell Buffer Config: Not Supported 00:18:11.772 Get LBA Status Capability: Not Supported 00:18:11.772 Command & Feature Lockdown Capability: Not Supported 00:18:11.772 Abort Command Limit: 4 00:18:11.772 Async Event Request Limit: 4 00:18:11.772 Number of Firmware Slots: N/A 00:18:11.772 Firmware Slot 1 Read-Only: N/A 00:18:11.772 Firmware Activation Without Reset: N/A 00:18:11.772 Multiple Update Detection Support: N/A 00:18:11.772 Firmware Update Granularity: No Information Provided 00:18:11.772 Per-Namespace SMART Log: No 00:18:11.772 Asymmetric Namespace Access 
Log Page: Not Supported 00:18:11.772 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:11.772 Command Effects Log Page: Supported 00:18:11.772 Get Log Page Extended Data: Supported 00:18:11.772 Telemetry Log Pages: Not Supported 00:18:11.772 Persistent Event Log Pages: Not Supported 00:18:11.772 Supported Log Pages Log Page: May Support 00:18:11.772 Commands Supported & Effects Log Page: Not Supported 00:18:11.772 Feature Identifiers & Effects Log Page:May Support 00:18:11.772 NVMe-MI Commands & Effects Log Page: May Support 00:18:11.772 Data Area 4 for Telemetry Log: Not Supported 00:18:11.772 Error Log Page Entries Supported: 128 00:18:11.772 Keep Alive: Supported 00:18:11.772 Keep Alive Granularity: 10000 ms 00:18:11.772 00:18:11.772 NVM Command Set Attributes 00:18:11.772 ========================== 00:18:11.772 Submission Queue Entry Size 00:18:11.772 Max: 64 00:18:11.772 Min: 64 00:18:11.772 Completion Queue Entry Size 00:18:11.772 Max: 16 00:18:11.772 Min: 16 00:18:11.772 Number of Namespaces: 32 00:18:11.772 Compare Command: Supported 00:18:11.772 Write Uncorrectable Command: Not Supported 00:18:11.772 Dataset Management Command: Supported 00:18:11.772 Write Zeroes Command: Supported 00:18:11.772 Set Features Save Field: Not Supported 00:18:11.772 Reservations: Not Supported 00:18:11.772 Timestamp: Not Supported 00:18:11.772 Copy: Supported 00:18:11.772 Volatile Write Cache: Present 00:18:11.772 Atomic Write Unit (Normal): 1 00:18:11.772 Atomic Write Unit (PFail): 1 00:18:11.772 Atomic Compare & Write Unit: 1 00:18:11.772 Fused Compare & Write: Supported 00:18:11.772 Scatter-Gather List 00:18:11.772 SGL Command Set: Supported (Dword aligned) 00:18:11.772 SGL Keyed: Not Supported 00:18:11.772 SGL Bit Bucket Descriptor: Not Supported 00:18:11.772 SGL Metadata Pointer: Not Supported 00:18:11.772 Oversized SGL: Not Supported 00:18:11.772 SGL Metadata Address: Not Supported 00:18:11.772 SGL Offset: Not Supported 00:18:11.772 Transport SGL Data Block: Not Supported 
00:18:11.772 Replay Protected Memory Block: Not Supported 00:18:11.772 00:18:11.772 Firmware Slot Information 00:18:11.772 ========================= 00:18:11.772 Active slot: 1 00:18:11.772 Slot 1 Firmware Revision: 25.01 00:18:11.772 00:18:11.772 00:18:11.772 Commands Supported and Effects 00:18:11.772 ============================== 00:18:11.772 Admin Commands 00:18:11.772 -------------- 00:18:11.772 Get Log Page (02h): Supported 00:18:11.772 Identify (06h): Supported 00:18:11.772 Abort (08h): Supported 00:18:11.772 Set Features (09h): Supported 00:18:11.772 Get Features (0Ah): Supported 00:18:11.772 Asynchronous Event Request (0Ch): Supported 00:18:11.772 Keep Alive (18h): Supported 00:18:11.772 I/O Commands 00:18:11.772 ------------ 00:18:11.772 Flush (00h): Supported LBA-Change 00:18:11.772 Write (01h): Supported LBA-Change 00:18:11.772 Read (02h): Supported 00:18:11.772 Compare (05h): Supported 00:18:11.772 Write Zeroes (08h): Supported LBA-Change 00:18:11.772 Dataset Management (09h): Supported LBA-Change 00:18:11.772 Copy (19h): Supported LBA-Change 00:18:11.772 00:18:11.772 Error Log 00:18:11.772 ========= 00:18:11.772 00:18:11.772 Arbitration 00:18:11.772 =========== 00:18:11.772 Arbitration Burst: 1 00:18:11.772 00:18:11.772 Power Management 00:18:11.772 ================ 00:18:11.772 Number of Power States: 1 00:18:11.772 Current Power State: Power State #0 00:18:11.772 Power State #0: 00:18:11.772 Max Power: 0.00 W 00:18:11.772 Non-Operational State: Operational 00:18:11.772 Entry Latency: Not Reported 00:18:11.772 Exit Latency: Not Reported 00:18:11.772 Relative Read Throughput: 0 00:18:11.772 Relative Read Latency: 0 00:18:11.772 Relative Write Throughput: 0 00:18:11.772 Relative Write Latency: 0 00:18:11.772 Idle Power: Not Reported 00:18:11.772 Active Power: Not Reported 00:18:11.772 Non-Operational Permissive Mode: Not Supported 00:18:11.772 00:18:11.772 Health Information 00:18:11.772 ================== 00:18:11.772 Critical Warnings: 00:18:11.772 
Available Spare Space: OK 00:18:11.772 Temperature: OK 00:18:11.772 Device Reliability: OK 00:18:11.772 Read Only: No 00:18:11.772 Volatile Memory Backup: OK 00:18:11.772 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:11.772 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:11.772 Available Spare: 0% 00:18:11.772 Available Sp[2024-10-14 13:29:03.529319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:11.772 [2024-10-14 13:29:03.537141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:11.772 [2024-10-14 13:29:03.537196] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:11.772 [2024-10-14 13:29:03.537214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.772 [2024-10-14 13:29:03.537225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.772 [2024-10-14 13:29:03.537235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.772 [2024-10-14 13:29:03.537245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.772 [2024-10-14 13:29:03.537331] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:11.772 [2024-10-14 13:29:03.537352] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:11.772 [2024-10-14 13:29:03.538333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:18:11.772 [2024-10-14 13:29:03.538418] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:11.772 [2024-10-14 13:29:03.538434] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:11.772 [2024-10-14 13:29:03.539350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:11.772 [2024-10-14 13:29:03.539375] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:11.772 [2024-10-14 13:29:03.539452] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:11.772 [2024-10-14 13:29:03.540644] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:11.772 are Threshold: 0% 00:18:11.772 Life Percentage Used: 0% 00:18:11.772 Data Units Read: 0 00:18:11.772 Data Units Written: 0 00:18:11.772 Host Read Commands: 0 00:18:11.772 Host Write Commands: 0 00:18:11.772 Controller Busy Time: 0 minutes 00:18:11.772 Power Cycles: 0 00:18:11.772 Power On Hours: 0 hours 00:18:11.772 Unsafe Shutdowns: 0 00:18:11.772 Unrecoverable Media Errors: 0 00:18:11.772 Lifetime Error Log Entries: 0 00:18:11.772 Warning Temperature Time: 0 minutes 00:18:11.772 Critical Temperature Time: 0 minutes 00:18:11.772 00:18:11.772 Number of Queues 00:18:11.772 ================ 00:18:11.772 Number of I/O Submission Queues: 127 00:18:11.772 Number of I/O Completion Queues: 127 00:18:11.772 00:18:11.772 Active Namespaces 00:18:11.772 ================= 00:18:11.772 Namespace ID:1 00:18:11.772 Error Recovery Timeout: Unlimited 00:18:11.772 Command Set Identifier: NVM (00h) 00:18:11.772 Deallocate: Supported 00:18:11.772 Deallocated/Unwritten Error: Not Supported 
00:18:11.772 Deallocated Read Value: Unknown 00:18:11.772 Deallocate in Write Zeroes: Not Supported 00:18:11.772 Deallocated Guard Field: 0xFFFF 00:18:11.772 Flush: Supported 00:18:11.772 Reservation: Supported 00:18:11.772 Namespace Sharing Capabilities: Multiple Controllers 00:18:11.772 Size (in LBAs): 131072 (0GiB) 00:18:11.772 Capacity (in LBAs): 131072 (0GiB) 00:18:11.772 Utilization (in LBAs): 131072 (0GiB) 00:18:11.772 NGUID: F69EFE3B4D764412B2EE1641DC999D19 00:18:11.772 UUID: f69efe3b-4d76-4412-b2ee-1641dc999d19 00:18:11.772 Thin Provisioning: Not Supported 00:18:11.772 Per-NS Atomic Units: Yes 00:18:11.772 Atomic Boundary Size (Normal): 0 00:18:11.772 Atomic Boundary Size (PFail): 0 00:18:11.772 Atomic Boundary Offset: 0 00:18:11.772 Maximum Single Source Range Length: 65535 00:18:11.772 Maximum Copy Length: 65535 00:18:11.772 Maximum Source Range Count: 1 00:18:11.772 NGUID/EUI64 Never Reused: No 00:18:11.772 Namespace Write Protected: No 00:18:11.772 Number of LBA Formats: 1 00:18:11.772 Current LBA Format: LBA Format #00 00:18:11.772 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:11.772 00:18:11.772 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:12.033 [2024-10-14 13:29:03.759877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:17.317 Initializing NVMe Controllers 00:18:17.317 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:17.317 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:17.317 Initialization complete. Launching workers. 
00:18:17.317 ======================================================== 00:18:17.317 Latency(us) 00:18:17.317 Device Information : IOPS MiB/s Average min max 00:18:17.317 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33920.35 132.50 3773.03 1179.93 7869.62 00:18:17.317 ======================================================== 00:18:17.317 Total : 33920.35 132.50 3773.03 1179.93 7869.62 00:18:17.317 00:18:17.317 [2024-10-14 13:29:08.865476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:17.317 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:17.317 [2024-10-14 13:29:09.112185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:22.594 Initializing NVMe Controllers 00:18:22.594 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:22.594 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:22.594 Initialization complete. Launching workers. 
00:18:22.594 ======================================================== 00:18:22.594 Latency(us) 00:18:22.594 Device Information : IOPS MiB/s Average min max 00:18:22.594 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31165.67 121.74 4106.23 1205.47 9800.99 00:18:22.594 ======================================================== 00:18:22.594 Total : 31165.67 121.74 4106.23 1205.47 9800.99 00:18:22.594 00:18:22.594 [2024-10-14 13:29:14.134603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:22.594 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:22.594 [2024-10-14 13:29:14.343515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:27.878 [2024-10-14 13:29:19.486269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:27.878 Initializing NVMe Controllers 00:18:27.878 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:27.878 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:27.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:27.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:27.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:27.878 Initialization complete. Launching workers. 
00:18:27.878 Starting thread on core 2 00:18:27.878 Starting thread on core 3 00:18:27.878 Starting thread on core 1 00:18:27.878 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:28.138 [2024-10-14 13:29:19.787626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.427 [2024-10-14 13:29:22.847673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.427 Initializing NVMe Controllers 00:18:31.427 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.427 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.427 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:31.427 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:31.427 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:31.427 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:31.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:31.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:31.427 Initialization complete. Launching workers. 
00:18:31.427 Starting thread on core 1 with urgent priority queue 00:18:31.427 Starting thread on core 2 with urgent priority queue 00:18:31.427 Starting thread on core 3 with urgent priority queue 00:18:31.427 Starting thread on core 0 with urgent priority queue 00:18:31.427 SPDK bdev Controller (SPDK2 ) core 0: 5318.00 IO/s 18.80 secs/100000 ios 00:18:31.427 SPDK bdev Controller (SPDK2 ) core 1: 5901.67 IO/s 16.94 secs/100000 ios 00:18:31.427 SPDK bdev Controller (SPDK2 ) core 2: 5745.00 IO/s 17.41 secs/100000 ios 00:18:31.427 SPDK bdev Controller (SPDK2 ) core 3: 5944.33 IO/s 16.82 secs/100000 ios 00:18:31.427 ======================================================== 00:18:31.427 00:18:31.427 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:31.427 [2024-10-14 13:29:23.148632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.427 Initializing NVMe Controllers 00:18:31.427 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.427 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.427 Namespace ID: 1 size: 0GB 00:18:31.427 Initialization complete. 00:18:31.427 INFO: using host memory buffer for IO 00:18:31.427 Hello world! 
00:18:31.427 [2024-10-14 13:29:23.159719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.427 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:31.685 [2024-10-14 13:29:23.457879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:33.065 Initializing NVMe Controllers 00:18:33.065 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:33.065 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:33.065 Initialization complete. Launching workers. 00:18:33.065 submit (in ns) avg, min, max = 7277.8, 3496.7, 6995334.4 00:18:33.065 complete (in ns) avg, min, max = 26882.8, 2073.3, 4025830.0 00:18:33.065 00:18:33.065 Submit histogram 00:18:33.065 ================ 00:18:33.065 Range in us Cumulative Count 00:18:33.065 3.484 - 3.508: 0.1250% ( 16) 00:18:33.065 3.508 - 3.532: 0.6093% ( 62) 00:18:33.065 3.532 - 3.556: 2.5621% ( 250) 00:18:33.065 3.556 - 3.579: 5.7257% ( 405) 00:18:33.065 3.579 - 3.603: 11.9356% ( 795) 00:18:33.065 3.603 - 3.627: 19.7547% ( 1001) 00:18:33.065 3.627 - 3.650: 29.2376% ( 1214) 00:18:33.065 3.650 - 3.674: 37.1661% ( 1015) 00:18:33.065 3.674 - 3.698: 44.7821% ( 975) 00:18:33.065 3.698 - 3.721: 51.5623% ( 868) 00:18:33.065 3.721 - 3.745: 57.2254% ( 725) 00:18:33.065 3.745 - 3.769: 61.6388% ( 565) 00:18:33.065 3.769 - 3.793: 65.2398% ( 461) 00:18:33.065 3.793 - 3.816: 68.7783% ( 453) 00:18:33.065 3.816 - 3.840: 72.0122% ( 414) 00:18:33.065 3.840 - 3.864: 75.6835% ( 470) 00:18:33.065 3.864 - 3.887: 79.2923% ( 462) 00:18:33.065 3.887 - 3.911: 82.4637% ( 406) 00:18:33.065 3.911 - 3.935: 85.1820% ( 348) 00:18:33.065 3.935 - 3.959: 87.3301% ( 275) 00:18:33.065 3.959 - 3.982: 89.3845% ( 
263) 00:18:33.065 3.982 - 4.006: 90.8452% ( 187) 00:18:33.065 4.006 - 4.030: 92.0872% ( 159) 00:18:33.065 4.030 - 4.053: 93.0323% ( 121) 00:18:33.065 4.053 - 4.077: 93.9463% ( 117) 00:18:33.065 4.077 - 4.101: 94.7118% ( 98) 00:18:33.065 4.101 - 4.124: 95.3210% ( 78) 00:18:33.065 4.124 - 4.148: 95.7585% ( 56) 00:18:33.065 4.148 - 4.172: 96.0865% ( 42) 00:18:33.065 4.172 - 4.196: 96.3365% ( 32) 00:18:33.065 4.196 - 4.219: 96.6177% ( 36) 00:18:33.065 4.219 - 4.243: 96.7896% ( 22) 00:18:33.065 4.243 - 4.267: 96.8677% ( 10) 00:18:33.065 4.267 - 4.290: 96.9848% ( 15) 00:18:33.065 4.290 - 4.314: 97.1020% ( 15) 00:18:33.065 4.314 - 4.338: 97.1879% ( 11) 00:18:33.065 4.338 - 4.361: 97.2582% ( 9) 00:18:33.065 4.361 - 4.385: 97.3207% ( 8) 00:18:33.065 4.385 - 4.409: 97.3598% ( 5) 00:18:33.065 4.409 - 4.433: 97.3754% ( 2) 00:18:33.065 4.433 - 4.456: 97.4067% ( 4) 00:18:33.065 4.456 - 4.480: 97.4145% ( 1) 00:18:33.065 4.527 - 4.551: 97.4301% ( 2) 00:18:33.065 4.622 - 4.646: 97.4379% ( 1) 00:18:33.065 4.646 - 4.670: 97.4535% ( 2) 00:18:33.065 4.670 - 4.693: 97.4613% ( 1) 00:18:33.065 4.693 - 4.717: 97.4691% ( 1) 00:18:33.065 4.717 - 4.741: 97.5004% ( 4) 00:18:33.065 4.741 - 4.764: 97.5082% ( 1) 00:18:33.065 4.764 - 4.788: 97.5473% ( 5) 00:18:33.065 4.788 - 4.812: 97.5863% ( 5) 00:18:33.065 4.812 - 4.836: 97.6097% ( 3) 00:18:33.065 4.836 - 4.859: 97.6410% ( 4) 00:18:33.065 4.859 - 4.883: 97.6800% ( 5) 00:18:33.065 4.883 - 4.907: 97.7269% ( 6) 00:18:33.065 4.907 - 4.930: 97.7660% ( 5) 00:18:33.065 4.930 - 4.954: 97.7894% ( 3) 00:18:33.065 4.954 - 4.978: 97.8285% ( 5) 00:18:33.065 4.978 - 5.001: 97.9066% ( 10) 00:18:33.065 5.001 - 5.025: 97.9456% ( 5) 00:18:33.065 5.025 - 5.049: 97.9691% ( 3) 00:18:33.065 5.049 - 5.073: 97.9925% ( 3) 00:18:33.065 5.073 - 5.096: 98.0081% ( 2) 00:18:33.065 5.096 - 5.120: 98.0550% ( 6) 00:18:33.065 5.120 - 5.144: 98.1097% ( 7) 00:18:33.065 5.144 - 5.167: 98.1253% ( 2) 00:18:33.065 5.167 - 5.191: 98.1643% ( 5) 00:18:33.065 5.191 - 5.215: 98.2268% ( 8) 
00:18:33.065 5.215 - 5.239: 98.2503% ( 3) 00:18:33.065 5.239 - 5.262: 98.2737% ( 3) 00:18:33.065 5.286 - 5.310: 98.2815% ( 1) 00:18:33.065 5.333 - 5.357: 98.2893% ( 1) 00:18:33.065 5.357 - 5.381: 98.3050% ( 2) 00:18:33.065 5.381 - 5.404: 98.3128% ( 1) 00:18:33.065 5.404 - 5.428: 98.3206% ( 1) 00:18:33.065 5.428 - 5.452: 98.3284% ( 1) 00:18:33.065 5.452 - 5.476: 98.3362% ( 1) 00:18:33.065 5.476 - 5.499: 98.3596% ( 3) 00:18:33.065 5.547 - 5.570: 98.3674% ( 1) 00:18:33.065 5.641 - 5.665: 98.3831% ( 2) 00:18:33.065 5.665 - 5.689: 98.3909% ( 1) 00:18:33.065 5.807 - 5.831: 98.3987% ( 1) 00:18:33.065 5.855 - 5.879: 98.4065% ( 1) 00:18:33.065 5.879 - 5.902: 98.4143% ( 1) 00:18:33.065 5.973 - 5.997: 98.4221% ( 1) 00:18:33.065 6.068 - 6.116: 98.4299% ( 1) 00:18:33.065 6.116 - 6.163: 98.4456% ( 2) 00:18:33.065 6.258 - 6.305: 98.4534% ( 1) 00:18:33.065 6.305 - 6.353: 98.4612% ( 1) 00:18:33.065 6.637 - 6.684: 98.4690% ( 1) 00:18:33.065 6.684 - 6.732: 98.4846% ( 2) 00:18:33.065 6.732 - 6.779: 98.5002% ( 2) 00:18:33.065 6.779 - 6.827: 98.5159% ( 2) 00:18:33.065 6.827 - 6.874: 98.5315% ( 2) 00:18:33.065 6.969 - 7.016: 98.5393% ( 1) 00:18:33.065 7.016 - 7.064: 98.5705% ( 4) 00:18:33.065 7.064 - 7.111: 98.5783% ( 1) 00:18:33.065 7.111 - 7.159: 98.5862% ( 1) 00:18:33.065 7.253 - 7.301: 98.5940% ( 1) 00:18:33.065 7.301 - 7.348: 98.6018% ( 1) 00:18:33.065 7.348 - 7.396: 98.6096% ( 1) 00:18:33.065 7.396 - 7.443: 98.6330% ( 3) 00:18:33.065 7.490 - 7.538: 98.6408% ( 1) 00:18:33.065 7.538 - 7.585: 98.6486% ( 1) 00:18:33.065 7.680 - 7.727: 98.6643% ( 2) 00:18:33.065 7.727 - 7.775: 98.6721% ( 1) 00:18:33.065 7.775 - 7.822: 98.6799% ( 1) 00:18:33.065 7.917 - 7.964: 98.6955% ( 2) 00:18:33.065 7.964 - 8.012: 98.7033% ( 1) 00:18:33.065 8.012 - 8.059: 98.7111% ( 1) 00:18:33.065 8.059 - 8.107: 98.7190% ( 1) 00:18:33.065 8.154 - 8.201: 98.7424% ( 3) 00:18:33.065 8.201 - 8.249: 98.7580% ( 2) 00:18:33.065 8.439 - 8.486: 98.7658% ( 1) 00:18:33.065 8.533 - 8.581: 98.7736% ( 1) 00:18:33.065 8.581 - 
8.628: 98.7893% ( 2) 00:18:33.065 8.676 - 8.723: 98.8049% ( 2) 00:18:33.065 8.723 - 8.770: 98.8127% ( 1) 00:18:33.065 8.770 - 8.818: 98.8205% ( 1) 00:18:33.065 8.818 - 8.865: 98.8283% ( 1) 00:18:33.065 8.865 - 8.913: 98.8361% ( 1) 00:18:33.065 9.055 - 9.102: 98.8439% ( 1) 00:18:33.065 9.102 - 9.150: 98.8517% ( 1) 00:18:33.065 9.150 - 9.197: 98.8674% ( 2) 00:18:33.065 9.339 - 9.387: 98.8752% ( 1) 00:18:33.065 9.481 - 9.529: 98.8830% ( 1) 00:18:33.065 9.529 - 9.576: 98.8908% ( 1) 00:18:33.065 9.908 - 9.956: 98.8986% ( 1) 00:18:33.065 10.003 - 10.050: 98.9064% ( 1) 00:18:33.065 10.050 - 10.098: 98.9142% ( 1) 00:18:33.065 10.193 - 10.240: 98.9455% ( 4) 00:18:33.065 10.287 - 10.335: 98.9533% ( 1) 00:18:33.065 10.335 - 10.382: 98.9611% ( 1) 00:18:33.065 10.382 - 10.430: 98.9689% ( 1) 00:18:33.065 10.667 - 10.714: 98.9767% ( 1) 00:18:33.065 10.856 - 10.904: 98.9845% ( 1) 00:18:33.065 10.904 - 10.951: 98.9923% ( 1) 00:18:33.065 10.951 - 10.999: 99.0002% ( 1) 00:18:33.065 10.999 - 11.046: 99.0080% ( 1) 00:18:33.065 11.046 - 11.093: 99.0158% ( 1) 00:18:33.065 11.141 - 11.188: 99.0236% ( 1) 00:18:33.065 11.188 - 11.236: 99.0314% ( 1) 00:18:33.065 11.236 - 11.283: 99.0392% ( 1) 00:18:33.065 11.378 - 11.425: 99.0470% ( 1) 00:18:33.065 11.662 - 11.710: 99.0548% ( 1) 00:18:33.065 11.852 - 11.899: 99.0705% ( 2) 00:18:33.065 12.136 - 12.231: 99.0783% ( 1) 00:18:33.065 12.231 - 12.326: 99.0939% ( 2) 00:18:33.065 12.421 - 12.516: 99.1017% ( 1) 00:18:33.065 12.990 - 13.084: 99.1173% ( 2) 00:18:33.065 13.084 - 13.179: 99.1251% ( 1) 00:18:33.065 13.369 - 13.464: 99.1329% ( 1) 00:18:33.065 13.464 - 13.559: 99.1486% ( 2) 00:18:33.065 13.653 - 13.748: 99.1564% ( 1) 00:18:33.065 13.748 - 13.843: 99.1720% ( 2) 00:18:33.065 13.938 - 14.033: 99.1798% ( 1) 00:18:33.065 14.127 - 14.222: 99.1876% ( 1) 00:18:33.065 14.317 - 14.412: 99.1954% ( 1) 00:18:33.065 14.412 - 14.507: 99.2032% ( 1) 00:18:33.065 14.601 - 14.696: 99.2111% ( 1) 00:18:33.065 14.696 - 14.791: 99.2189% ( 1) 00:18:33.065 14.791 - 
14.886: 99.2267% ( 1) 00:18:33.065 14.981 - 15.076: 99.2345% ( 1) 00:18:33.065 17.067 - 17.161: 99.2423% ( 1) 00:18:33.065 17.351 - 17.446: 99.2501% ( 1) 00:18:33.065 17.446 - 17.541: 99.3048% ( 7) 00:18:33.065 17.541 - 17.636: 99.3204% ( 2) 00:18:33.065 17.636 - 17.730: 99.3595% ( 5) 00:18:33.065 17.730 - 17.825: 99.3985% ( 5) 00:18:33.065 17.825 - 17.920: 99.4454% ( 6) 00:18:33.065 17.920 - 18.015: 99.4766% ( 4) 00:18:33.065 18.015 - 18.110: 99.5313% ( 7) 00:18:33.065 18.110 - 18.204: 99.5704% ( 5) 00:18:33.065 18.204 - 18.299: 99.6329% ( 8) 00:18:33.065 18.299 - 18.394: 99.6719% ( 5) 00:18:33.065 18.394 - 18.489: 99.7422% ( 9) 00:18:33.065 18.489 - 18.584: 99.7657% ( 3) 00:18:33.065 18.584 - 18.679: 99.7813% ( 2) 00:18:33.065 18.679 - 18.773: 99.8047% ( 3) 00:18:33.065 18.773 - 18.868: 99.8282% ( 3) 00:18:33.065 18.963 - 19.058: 99.8360% ( 1) 00:18:33.065 19.342 - 19.437: 99.8438% ( 1) 00:18:33.066 19.437 - 19.532: 99.8516% ( 1) 00:18:33.066 20.006 - 20.101: 99.8594% ( 1) 00:18:33.066 20.101 - 20.196: 99.8672% ( 1) 00:18:33.066 22.471 - 22.566: 99.8750% ( 1) 00:18:33.066 22.756 - 22.850: 99.8828% ( 1) 00:18:33.066 23.230 - 23.324: 99.8906% ( 1) 00:18:33.066 23.988 - 24.083: 99.8985% ( 1) 00:18:33.066 24.083 - 24.178: 99.9063% ( 1) 00:18:33.066 25.600 - 25.790: 99.9141% ( 1) 00:18:33.066 26.738 - 26.927: 99.9219% ( 1) 00:18:33.066 3980.705 - 4004.978: 99.9453% ( 3) 00:18:33.066 4004.978 - 4029.250: 99.9922% ( 6) 00:18:33.066 6990.507 - 7039.052: 100.0000% ( 1) 00:18:33.066 00:18:33.066 Complete histogram 00:18:33.066 ================== 00:18:33.066 Range in us Cumulative Count 00:18:33.066 2.062 - 2.074: 0.0156% ( 2) 00:18:33.066 2.074 - 2.086: 15.9038% ( 2034) 00:18:33.066 2.086 - 2.098: 50.5702% ( 4438) 00:18:33.066 2.098 - 2.110: 53.6010% ( 388) 00:18:33.066 2.110 - 2.121: 56.8739% ( 419) 00:18:33.066 2.121 - 2.133: 59.9125% ( 389) 00:18:33.066 2.133 - 2.145: 61.6466% ( 222) 00:18:33.066 2.145 - 2.157: 72.2543% ( 1358) 00:18:33.066 2.157 - 2.169: 81.7997% ( 
1222) 00:18:33.066 2.169 - 2.181: 82.7605% ( 123) 00:18:33.066 2.181 - 2.193: 84.6586% ( 243) 00:18:33.066 2.193 - 2.204: 86.0959% ( 184) 00:18:33.066 2.204 - 2.216: 86.6583% ( 72) 00:18:33.066 2.216 - 2.228: 88.7283% ( 265) 00:18:33.066 2.228 - 2.240: 91.3373% ( 334) 00:18:33.066 2.240 - 2.252: 93.2667% ( 247) 00:18:33.066 2.252 - 2.264: 93.8838% ( 79) 00:18:33.066 2.264 - 2.276: 94.1884% ( 39) 00:18:33.066 2.276 - 2.287: 94.4227% ( 30) 00:18:33.066 2.287 - 2.299: 94.6571% ( 30) 00:18:33.066 2.299 - 2.311: 94.9930% ( 43) 00:18:33.066 2.311 - 2.323: 95.5710% ( 74) 00:18:33.066 2.323 - 2.335: 95.6882% ( 15) 00:18:33.066 2.335 - 2.347: 95.7116% ( 3) 00:18:33.066 2.347 - 2.359: 95.7819% ( 9) 00:18:33.066 2.359 - 2.370: 95.8913% ( 14) 00:18:33.066 2.370 - 2.382: 96.0865% ( 25) 00:18:33.066 2.382 - 2.394: 96.4537% ( 47) 00:18:33.066 2.394 - 2.406: 96.8677% ( 53) 00:18:33.066 2.406 - 2.418: 97.1098% ( 31) 00:18:33.066 2.418 - 2.430: 97.3051% ( 25) 00:18:33.066 2.430 - 2.441: 97.5785% ( 35) 00:18:33.066 2.441 - 2.453: 97.7894% ( 27) 00:18:33.066 2.453 - 2.465: 97.9534% ( 21) 00:18:33.066 2.465 - 2.477: 98.1097% ( 20) 00:18:33.066 2.477 - 2.489: 98.2034% ( 12) 00:18:33.066 2.489 - 2.501: 98.2659% ( 8) 00:18:33.066 2.501 - 2.513: 98.2971% ( 4) 00:18:33.066 2.513 - 2.524: 98.3831% ( 11) 00:18:33.066 2.524 - 2.536: 98.4299% ( 6) 00:18:33.066 2.536 - 2.548: 98.4534% ( 3) 00:18:33.066 2.548 - 2.560: 98.5002% ( 6) 00:18:33.066 2.560 - 2.572: 98.5159% ( 2) 00:18:33.066 2.572 - 2.584: 98.5237% ( 1) 00:18:33.066 2.584 - 2.596: 98.5393% ( 2) 00:18:33.066 2.714 - 2.726: 98.5471% ( 1) 00:18:33.066 2.726 - 2.738: 98.5549% ( 1) 00:18:33.066 2.785 - 2.797: 98.5627% ( 1) 00:18:33.066 2.797 - 2.809: 98.5705% ( 1) 00:18:33.066 2.809 - 2.821: 98.5783% ( 1) 00:18:33.066 2.880 - 2.892: 98.5862% ( 1) 00:18:33.066 2.916 - 2.927: 98.5940% ( 1) 00:18:33.066 3.390 - 3.413: 98.6096% ( 2) 00:18:33.066 3.413 - 3.437: 98.6174% ( 1) 00:18:33.066 3.461 - 3.484: 98.6252% ( 1) 00:18:33.066 3.484 - 3.508: 
98.6330% ( 1) 00:18:33.066 3.508 - 3.532: 98.6486% ( 2) 00:18:33.066 [2024-10-14 13:29:24.551180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:33.066 3.603 - 3.627: 98.6643% ( 2) 00:18:33.066 3.627 - 3.650: 98.6721% ( 1) 00:18:33.066 3.745 - 3.769: 98.6799% ( 1) 00:18:33.066 3.769 - 3.793: 98.6877% ( 1) 00:18:33.066 3.793 - 3.816: 98.6955% ( 1) 00:18:33.066 3.840 - 3.864: 98.7033% ( 1) 00:18:33.066 3.911 - 3.935: 98.7111% ( 1) 00:18:33.066 3.935 - 3.959: 98.7268% ( 2) 00:18:33.066 3.959 - 3.982: 98.7346% ( 1) 00:18:33.066 3.982 - 4.006: 98.7424% ( 1) 00:18:33.066 4.006 - 4.030: 98.7502% ( 1) 00:18:33.066 4.053 - 4.077: 98.7658% ( 2) 00:18:33.066 4.243 - 4.267: 98.7736% ( 1) 00:18:33.066 4.267 - 4.290: 98.7814% ( 1) 00:18:33.066 4.290 - 4.314: 98.7893% ( 1) 00:18:33.066 4.599 - 4.622: 98.7971% ( 1) 00:18:33.066 5.310 - 5.333: 98.8049% ( 1) 00:18:33.066 5.381 - 5.404: 98.8127% ( 1) 00:18:33.066 5.665 - 5.689: 98.8205% ( 1) 00:18:33.066 5.736 - 5.760: 98.8283% ( 1) 00:18:33.066 6.116 - 6.163: 98.8361% ( 1) 00:18:33.066 6.163 - 6.210: 98.8439% ( 1) 00:18:33.066 6.210 - 6.258: 98.8517% ( 1) 00:18:33.066 6.542 - 6.590: 98.8596% ( 1) 00:18:33.066 6.684 - 6.732: 98.8674% ( 1) 00:18:33.066 6.779 - 6.827: 98.8752% ( 1) 00:18:33.066 6.827 - 6.874: 98.8830% ( 1) 00:18:33.066 7.206 - 7.253: 98.8986% ( 2) 00:18:33.066 7.680 - 7.727: 98.9064% ( 1) 00:18:33.066 8.486 - 8.533: 98.9142% ( 1) 00:18:33.066 15.550 - 15.644: 98.9220% ( 1) 00:18:33.066 15.644 - 15.739: 98.9299% ( 1) 00:18:33.066 15.739 - 15.834: 98.9533% ( 3) 00:18:33.066 15.834 - 15.929: 98.9767% ( 3) 00:18:33.066 15.929 - 16.024: 99.0002% ( 3) 00:18:33.066 16.024 - 16.119: 99.0236% ( 3) 00:18:33.066 16.119 - 16.213: 99.0314% ( 1) 00:18:33.066 16.213 - 16.308: 99.0392% ( 1) 00:18:33.066 16.308 - 16.403: 99.0470% ( 1) 00:18:33.066 16.403 - 16.498: 99.0626% ( 2) 00:18:33.066 16.498 - 16.593: 99.0783% ( 2) 00:18:33.066 16.593 - 16.687: 99.1095% ( 4) 
00:18:33.066 16.687 - 16.782: 99.1486% ( 5) 00:18:33.066 16.782 - 16.877: 99.1798% ( 4) 00:18:33.066 16.877 - 16.972: 99.1954% ( 2) 00:18:33.066 16.972 - 17.067: 99.2111% ( 2) 00:18:33.066 17.161 - 17.256: 99.2345% ( 3) 00:18:33.066 17.256 - 17.351: 99.2423% ( 1) 00:18:33.066 17.446 - 17.541: 99.2501% ( 1) 00:18:33.066 17.541 - 17.636: 99.2579% ( 1) 00:18:33.066 17.730 - 17.825: 99.2657% ( 1) 00:18:33.066 17.920 - 18.015: 99.2892% ( 3) 00:18:33.066 18.015 - 18.110: 99.2970% ( 1) 00:18:33.066 18.110 - 18.204: 99.3048% ( 1) 00:18:33.066 18.299 - 18.394: 99.3126% ( 1) 00:18:33.066 18.394 - 18.489: 99.3282% ( 2) 00:18:33.066 18.584 - 18.679: 99.3360% ( 1) 00:18:33.066 18.679 - 18.773: 99.3439% ( 1) 00:18:33.066 18.773 - 18.868: 99.3517% ( 1) 00:18:33.066 18.868 - 18.963: 99.3595% ( 1) 00:18:33.066 22.281 - 22.376: 99.3673% ( 1) 00:18:33.066 22.471 - 22.566: 99.3751% ( 1) 00:18:33.066 22.756 - 22.850: 99.3829% ( 1) 00:18:33.066 3495.253 - 3519.526: 99.3907% ( 1) 00:18:33.066 3980.705 - 4004.978: 99.7422% ( 45) 00:18:33.066 4004.978 - 4029.250: 100.0000% ( 33) 00:18:33.066 00:18:33.066 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:33.066 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:33.066 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:33.066 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:33.066 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:33.066 [ 00:18:33.066 { 00:18:33.066 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:33.066 "subtype": "Discovery", 
00:18:33.066 "listen_addresses": [], 00:18:33.066 "allow_any_host": true, 00:18:33.066 "hosts": [] 00:18:33.066 }, 00:18:33.066 { 00:18:33.066 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:33.066 "subtype": "NVMe", 00:18:33.066 "listen_addresses": [ 00:18:33.066 { 00:18:33.066 "trtype": "VFIOUSER", 00:18:33.066 "adrfam": "IPv4", 00:18:33.066 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:33.066 "trsvcid": "0" 00:18:33.066 } 00:18:33.066 ], 00:18:33.066 "allow_any_host": true, 00:18:33.066 "hosts": [], 00:18:33.066 "serial_number": "SPDK1", 00:18:33.066 "model_number": "SPDK bdev Controller", 00:18:33.066 "max_namespaces": 32, 00:18:33.066 "min_cntlid": 1, 00:18:33.066 "max_cntlid": 65519, 00:18:33.066 "namespaces": [ 00:18:33.066 { 00:18:33.066 "nsid": 1, 00:18:33.066 "bdev_name": "Malloc1", 00:18:33.066 "name": "Malloc1", 00:18:33.066 "nguid": "00CEC097A9BE4E1999A5A017693737F3", 00:18:33.066 "uuid": "00cec097-a9be-4e19-99a5-a017693737f3" 00:18:33.066 }, 00:18:33.066 { 00:18:33.066 "nsid": 2, 00:18:33.066 "bdev_name": "Malloc3", 00:18:33.066 "name": "Malloc3", 00:18:33.066 "nguid": "0E8FBD62CCE64E2785060CB2C3EB4939", 00:18:33.066 "uuid": "0e8fbd62-cce6-4e27-8506-0cb2c3eb4939" 00:18:33.066 } 00:18:33.066 ] 00:18:33.066 }, 00:18:33.066 { 00:18:33.066 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:33.066 "subtype": "NVMe", 00:18:33.066 "listen_addresses": [ 00:18:33.066 { 00:18:33.066 "trtype": "VFIOUSER", 00:18:33.066 "adrfam": "IPv4", 00:18:33.066 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:33.066 "trsvcid": "0" 00:18:33.066 } 00:18:33.066 ], 00:18:33.066 "allow_any_host": true, 00:18:33.066 "hosts": [], 00:18:33.066 "serial_number": "SPDK2", 00:18:33.066 "model_number": "SPDK bdev Controller", 00:18:33.066 "max_namespaces": 32, 00:18:33.066 "min_cntlid": 1, 00:18:33.066 "max_cntlid": 65519, 00:18:33.066 "namespaces": [ 00:18:33.066 { 00:18:33.066 "nsid": 1, 00:18:33.066 "bdev_name": "Malloc2", 00:18:33.067 "name": "Malloc2", 00:18:33.067 "nguid": 
"F69EFE3B4D764412B2EE1641DC999D19", 00:18:33.067 "uuid": "f69efe3b-4d76-4412-b2ee-1641dc999d19" 00:18:33.067 } 00:18:33.067 ] 00:18:33.067 } 00:18:33.067 ] 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=233075 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:18:33.326 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:18:33.326 [2024-10-14 13:29:25.090620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:33.326 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:33.585 Malloc4 00:18:33.843 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:34.102 [2024-10-14 13:29:25.703244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:34.102 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:34.102 Asynchronous Event Request test 00:18:34.102 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:34.102 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:34.102 
Registering asynchronous event callbacks... 00:18:34.102 Starting namespace attribute notice tests for all controllers... 00:18:34.102 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:34.102 aer_cb - Changed Namespace 00:18:34.102 Cleaning up... 00:18:34.360 [ 00:18:34.360 { 00:18:34.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:34.360 "subtype": "Discovery", 00:18:34.360 "listen_addresses": [], 00:18:34.360 "allow_any_host": true, 00:18:34.360 "hosts": [] 00:18:34.360 }, 00:18:34.360 { 00:18:34.360 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:34.360 "subtype": "NVMe", 00:18:34.360 "listen_addresses": [ 00:18:34.360 { 00:18:34.360 "trtype": "VFIOUSER", 00:18:34.360 "adrfam": "IPv4", 00:18:34.360 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:34.360 "trsvcid": "0" 00:18:34.360 } 00:18:34.360 ], 00:18:34.360 "allow_any_host": true, 00:18:34.360 "hosts": [], 00:18:34.360 "serial_number": "SPDK1", 00:18:34.360 "model_number": "SPDK bdev Controller", 00:18:34.360 "max_namespaces": 32, 00:18:34.360 "min_cntlid": 1, 00:18:34.360 "max_cntlid": 65519, 00:18:34.360 "namespaces": [ 00:18:34.360 { 00:18:34.360 "nsid": 1, 00:18:34.360 "bdev_name": "Malloc1", 00:18:34.360 "name": "Malloc1", 00:18:34.360 "nguid": "00CEC097A9BE4E1999A5A017693737F3", 00:18:34.360 "uuid": "00cec097-a9be-4e19-99a5-a017693737f3" 00:18:34.360 }, 00:18:34.360 { 00:18:34.360 "nsid": 2, 00:18:34.360 "bdev_name": "Malloc3", 00:18:34.360 "name": "Malloc3", 00:18:34.360 "nguid": "0E8FBD62CCE64E2785060CB2C3EB4939", 00:18:34.360 "uuid": "0e8fbd62-cce6-4e27-8506-0cb2c3eb4939" 00:18:34.360 } 00:18:34.360 ] 00:18:34.360 }, 00:18:34.360 { 00:18:34.360 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:34.360 "subtype": "NVMe", 00:18:34.360 "listen_addresses": [ 00:18:34.360 { 00:18:34.360 "trtype": "VFIOUSER", 00:18:34.360 "adrfam": "IPv4", 00:18:34.360 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:34.360 "trsvcid": "0" 
00:18:34.360 } 00:18:34.360 ], 00:18:34.360 "allow_any_host": true, 00:18:34.360 "hosts": [], 00:18:34.360 "serial_number": "SPDK2", 00:18:34.360 "model_number": "SPDK bdev Controller", 00:18:34.360 "max_namespaces": 32, 00:18:34.360 "min_cntlid": 1, 00:18:34.360 "max_cntlid": 65519, 00:18:34.360 "namespaces": [ 00:18:34.360 { 00:18:34.360 "nsid": 1, 00:18:34.360 "bdev_name": "Malloc2", 00:18:34.360 "name": "Malloc2", 00:18:34.360 "nguid": "F69EFE3B4D764412B2EE1641DC999D19", 00:18:34.360 "uuid": "f69efe3b-4d76-4412-b2ee-1641dc999d19" 00:18:34.360 }, 00:18:34.360 { 00:18:34.360 "nsid": 2, 00:18:34.360 "bdev_name": "Malloc4", 00:18:34.360 "name": "Malloc4", 00:18:34.360 "nguid": "E318A77D4E524AF98FBC792CED541E55", 00:18:34.360 "uuid": "e318a77d-4e52-4af9-8fbc-792ced541e55" 00:18:34.360 } 00:18:34.360 ] 00:18:34.360 } 00:18:34.360 ] 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 233075 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 227355 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 227355 ']' 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 227355 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 227355 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 227355' 00:18:34.361 killing process with pid 227355 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 227355 00:18:34.361 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 227355 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=233232 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 233232' 00:18:34.619 Process pid: 233232 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 233232 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # '[' -z 233232 ']' 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.619 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:34.619 [2024-10-14 13:29:26.395692] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:34.619 [2024-10-14 13:29:26.396731] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:18:34.619 [2024-10-14 13:29:26.396801] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.619 [2024-10-14 13:29:26.455339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.877 [2024-10-14 13:29:26.498436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.877 [2024-10-14 13:29:26.498492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:34.877 [2024-10-14 13:29:26.498520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.877 [2024-10-14 13:29:26.498530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.877 [2024-10-14 13:29:26.498539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.877 [2024-10-14 13:29:26.499939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.877 [2024-10-14 13:29:26.500048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.877 [2024-10-14 13:29:26.500150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.877 [2024-10-14 13:29:26.500154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.877 [2024-10-14 13:29:26.578925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:34.877 [2024-10-14 13:29:26.579156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:34.877 [2024-10-14 13:29:26.579396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:34.877 [2024-10-14 13:29:26.579967] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:34.877 [2024-10-14 13:29:26.580213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:34.877 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.877 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:34.877 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:35.865 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:36.124 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:36.124 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:36.124 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:36.124 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:36.124 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:36.692 Malloc1 00:18:36.692 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:36.952 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:37.210 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:37.469 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:37.469 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:37.469 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:37.728 Malloc2 00:18:37.728 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:37.987 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:38.246 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 233232 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 233232 ']' 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 233232 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.504 13:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 233232 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 233232' 00:18:38.504 killing process with pid 233232 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 233232 00:18:38.504 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 233232 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:38.764 00:18:38.764 real 0m53.860s 00:18:38.764 user 3m28.336s 00:18:38.764 sys 0m4.027s 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:38.764 ************************************ 00:18:38.764 END TEST nvmf_vfio_user 00:18:38.764 ************************************ 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.764 ************************************ 00:18:38.764 START TEST nvmf_vfio_user_nvme_compliance 00:18:38.764 ************************************ 00:18:38.764 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:39.024 * Looking for test storage... 00:18:39.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.024 13:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.024 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.025 13:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.025 --rc genhtml_branch_coverage=1 00:18:39.025 --rc genhtml_function_coverage=1 00:18:39.025 --rc genhtml_legend=1 00:18:39.025 --rc geninfo_all_blocks=1 00:18:39.025 --rc geninfo_unexecuted_blocks=1 00:18:39.025 00:18:39.025 ' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.025 --rc genhtml_branch_coverage=1 00:18:39.025 --rc genhtml_function_coverage=1 00:18:39.025 --rc genhtml_legend=1 00:18:39.025 --rc geninfo_all_blocks=1 00:18:39.025 --rc geninfo_unexecuted_blocks=1 00:18:39.025 00:18:39.025 ' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.025 --rc genhtml_branch_coverage=1 00:18:39.025 --rc genhtml_function_coverage=1 00:18:39.025 --rc 
genhtml_legend=1 00:18:39.025 --rc geninfo_all_blocks=1 00:18:39.025 --rc geninfo_unexecuted_blocks=1 00:18:39.025 00:18:39.025 ' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.025 --rc genhtml_branch_coverage=1 00:18:39.025 --rc genhtml_function_coverage=1 00:18:39.025 --rc genhtml_legend=1 00:18:39.025 --rc geninfo_all_blocks=1 00:18:39.025 --rc geninfo_unexecuted_blocks=1 00:18:39.025 00:18:39.025 ' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.025 13:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:39.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:39.025 13:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=233834 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 233834' 00:18:39.025 Process pid: 233834 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 233834 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 233834 ']' 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.025 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:39.025 [2024-10-14 13:29:30.778195] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:18:39.025 [2024-10-14 13:29:30.778278] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.025 [2024-10-14 13:29:30.840379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:39.286 [2024-10-14 13:29:30.890683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.286 [2024-10-14 13:29:30.890754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.286 [2024-10-14 13:29:30.890768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.286 [2024-10-14 13:29:30.890793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.286 [2024-10-14 13:29:30.890803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.286 [2024-10-14 13:29:30.892297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.286 [2024-10-14 13:29:30.892360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.286 [2024-10-14 13:29:30.892363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.286 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.286 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:39.286 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:40.221 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.221 13:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 malloc0 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:40.481 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:40.481 00:18:40.481 00:18:40.481 CUnit - A unit testing framework for C - Version 2.1-3 00:18:40.481 http://cunit.sourceforge.net/ 00:18:40.481 00:18:40.481 00:18:40.481 Suite: nvme_compliance 00:18:40.481 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-14 13:29:32.269685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.481 [2024-10-14 13:29:32.271184] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:40.481 [2024-10-14 13:29:32.271209] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:40.481 [2024-10-14 13:29:32.271221] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:40.481 [2024-10-14 13:29:32.272702] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.481 passed 00:18:40.740 Test: admin_identify_ctrlr_verify_fused ...[2024-10-14 13:29:32.357284] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.740 [2024-10-14 13:29:32.360310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.740 passed 00:18:40.740 Test: admin_identify_ns ...[2024-10-14 13:29:32.445632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.740 [2024-10-14 13:29:32.505158] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:40.740 [2024-10-14 13:29:32.513163] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:40.740 [2024-10-14 13:29:32.534268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:40.740 passed 00:18:40.998 Test: admin_get_features_mandatory_features ...[2024-10-14 13:29:32.620391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.998 [2024-10-14 13:29:32.623410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.998 passed 00:18:40.998 Test: admin_get_features_optional_features ...[2024-10-14 13:29:32.705934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.998 [2024-10-14 13:29:32.708952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.998 passed 00:18:40.998 Test: admin_set_features_number_of_queues ...[2024-10-14 13:29:32.793101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.257 [2024-10-14 13:29:32.895251] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.257 passed 00:18:41.257 Test: admin_get_log_page_mandatory_logs ...[2024-10-14 13:29:32.980713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.257 [2024-10-14 13:29:32.983736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.257 passed 00:18:41.257 Test: admin_get_log_page_with_lpo ...[2024-10-14 13:29:33.065721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.517 [2024-10-14 13:29:33.134144] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:41.517 [2024-10-14 13:29:33.147245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.517 passed 00:18:41.517 Test: fabric_property_get ...[2024-10-14 13:29:33.227637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.517 [2024-10-14 13:29:33.228908] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:41.517 [2024-10-14 13:29:33.231661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.517 passed 00:18:41.517 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-14 13:29:33.316225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.517 [2024-10-14 13:29:33.317545] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:41.517 [2024-10-14 13:29:33.319252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.517 passed 00:18:41.776 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-14 13:29:33.404557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.776 [2024-10-14 13:29:33.488142] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:41.776 [2024-10-14 13:29:33.504137] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:41.776 [2024-10-14 13:29:33.509250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.776 passed 00:18:41.776 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-14 13:29:33.592856] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.776 [2024-10-14 13:29:33.594142] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:41.776 [2024-10-14 13:29:33.595879] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.776 passed 00:18:42.037 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-14 13:29:33.677009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:42.037 [2024-10-14 13:29:33.754170] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:42.037 [2024-10-14 
13:29:33.778158] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:42.037 [2024-10-14 13:29:33.783229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:42.037 passed 00:18:42.037 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-14 13:29:33.866872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:42.037 [2024-10-14 13:29:33.868198] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:42.037 [2024-10-14 13:29:33.868261] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:42.037 [2024-10-14 13:29:33.869895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:42.295 passed 00:18:42.295 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-14 13:29:33.954185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:42.295 [2024-10-14 13:29:34.048153] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:42.295 [2024-10-14 13:29:34.056166] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:42.295 [2024-10-14 13:29:34.064149] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:42.295 [2024-10-14 13:29:34.072153] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:42.295 [2024-10-14 13:29:34.101270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:42.295 passed 00:18:42.553 Test: admin_create_io_sq_verify_pc ...[2024-10-14 13:29:34.180818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:42.553 [2024-10-14 13:29:34.196164] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:42.553 [2024-10-14 13:29:34.213244] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:42.553 passed 00:18:42.553 Test: admin_create_io_qp_max_qps ...[2024-10-14 13:29:34.300850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:43.930 [2024-10-14 13:29:35.396146] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:43.930 [2024-10-14 13:29:35.782813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.188 passed 00:18:44.188 Test: admin_create_io_sq_shared_cq ...[2024-10-14 13:29:35.866146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:44.188 [2024-10-14 13:29:36.000137] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:44.188 [2024-10-14 13:29:36.037229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:44.446 passed 00:18:44.446 00:18:44.446 Run Summary: Type Total Ran Passed Failed Inactive 00:18:44.446 suites 1 1 n/a 0 0 00:18:44.446 tests 18 18 18 0 0 00:18:44.446 asserts 360 360 360 0 n/a 00:18:44.446 00:18:44.446 Elapsed time = 1.561 seconds 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 233834 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 233834 ']' 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 233834 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 233834 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 233834' 00:18:44.446 killing process with pid 233834 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 233834 00:18:44.446 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 233834 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:44.706 00:18:44.706 real 0m5.748s 00:18:44.706 user 0m16.147s 00:18:44.706 sys 0m0.587s 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:44.706 ************************************ 00:18:44.706 END TEST nvmf_vfio_user_nvme_compliance 00:18:44.706 ************************************ 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.706 ************************************ 00:18:44.706 START TEST nvmf_vfio_user_fuzz 00:18:44.706 ************************************ 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:44.706 * Looking for test storage... 00:18:44.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:44.706 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.707 13:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:44.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.707 --rc genhtml_branch_coverage=1 00:18:44.707 --rc genhtml_function_coverage=1 00:18:44.707 --rc genhtml_legend=1 00:18:44.707 --rc geninfo_all_blocks=1 00:18:44.707 --rc geninfo_unexecuted_blocks=1 00:18:44.707 00:18:44.707 ' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:44.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.707 --rc genhtml_branch_coverage=1 00:18:44.707 --rc genhtml_function_coverage=1 00:18:44.707 --rc genhtml_legend=1 00:18:44.707 --rc geninfo_all_blocks=1 00:18:44.707 --rc geninfo_unexecuted_blocks=1 00:18:44.707 00:18:44.707 ' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:44.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.707 --rc genhtml_branch_coverage=1 00:18:44.707 --rc genhtml_function_coverage=1 00:18:44.707 --rc genhtml_legend=1 00:18:44.707 --rc geninfo_all_blocks=1 00:18:44.707 --rc geninfo_unexecuted_blocks=1 00:18:44.707 00:18:44.707 ' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:44.707 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:44.707 --rc genhtml_branch_coverage=1 00:18:44.707 --rc genhtml_function_coverage=1 00:18:44.707 --rc genhtml_legend=1 00:18:44.707 --rc geninfo_all_blocks=1 00:18:44.707 --rc geninfo_unexecuted_blocks=1 00:18:44.707 00:18:44.707 ' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.707 13:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=234569 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 234569' 00:18:44.707 Process pid: 234569 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 234569 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 234569 ']' 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.707 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.707 13:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.708 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.708 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.968 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.968 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:44.968 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:46.352 malloc0 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:46.352 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:18.420 Fuzzing completed. Shutting down the fuzz application 00:19:18.420 00:19:18.420 Dumping successful admin opcodes: 00:19:18.420 8, 9, 10, 24, 00:19:18.420 Dumping successful io opcodes: 00:19:18.420 0, 00:19:18.420 NS: 0x20000081ef00 I/O qp, Total commands completed: 684671, total successful commands: 2665, random_seed: 2207792128 00:19:18.420 NS: 0x20000081ef00 admin qp, Total commands completed: 88020, total successful commands: 706, random_seed: 1333535360 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 234569 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 234569 ']' 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 234569 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 234569 00:19:18.420 13:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 234569' 00:19:18.420 killing process with pid 234569 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 234569 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 234569 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:18.420 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:18.421 00:19:18.421 real 0m32.119s 00:19:18.421 user 0m33.292s 00:19:18.421 sys 0m25.680s 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:18.421 ************************************ 00:19:18.421 END TEST nvmf_vfio_user_fuzz 00:19:18.421 ************************************ 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:18.421 ************************************ 00:19:18.421 START TEST nvmf_auth_target 00:19:18.421 ************************************ 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:18.421 * Looking for test storage... 00:19:18.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.421 13:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.421 13:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:18.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.421 --rc genhtml_branch_coverage=1 00:19:18.421 --rc genhtml_function_coverage=1 00:19:18.421 --rc genhtml_legend=1 00:19:18.421 --rc geninfo_all_blocks=1 00:19:18.421 --rc geninfo_unexecuted_blocks=1 00:19:18.421 00:19:18.421 ' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:18.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.421 --rc genhtml_branch_coverage=1 00:19:18.421 --rc genhtml_function_coverage=1 00:19:18.421 --rc genhtml_legend=1 00:19:18.421 --rc geninfo_all_blocks=1 00:19:18.421 --rc geninfo_unexecuted_blocks=1 00:19:18.421 00:19:18.421 ' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:18.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.421 --rc genhtml_branch_coverage=1 00:19:18.421 --rc genhtml_function_coverage=1 00:19:18.421 --rc genhtml_legend=1 00:19:18.421 --rc geninfo_all_blocks=1 00:19:18.421 --rc geninfo_unexecuted_blocks=1 00:19:18.421 00:19:18.421 ' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:18.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.421 --rc genhtml_branch_coverage=1 00:19:18.421 --rc genhtml_function_coverage=1 00:19:18.421 --rc genhtml_legend=1 00:19:18.421 
--rc geninfo_all_blocks=1 00:19:18.421 --rc geninfo_unexecuted_blocks=1 00:19:18.421 00:19:18.421 ' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.421 
13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.421 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:18.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:18.422 13:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:18.422 13:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:18.422 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.991 13:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:18.991 13:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:18.991 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:18.991 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.991 
13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:18.991 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:18.991 
13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:18.991 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.991 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:18.992 13:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:18.992 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:19:19.251 00:19:19.251 --- 10.0.0.2 ping statistics --- 00:19:19.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.251 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:19:19.251 00:19:19.251 --- 10.0.0.1 ping statistics --- 00:19:19.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.251 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
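The `nvmf/common.sh@262`..`@291` trace above builds the test topology: one port of the NIC pair (`cvl_0_0`) is moved into a dedicated network namespace to act as the target at 10.0.0.2, while `cvl_0_1` stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP/4420 and a ping in each direction as a sanity check. The sequence can be sketched in dry-run form (interface names and addresses are taken from the log; `run` only prints, so this is safe to execute without root):

```shell
# Dry-run sketch of the target/initiator split performed in the trace.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into its own namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

With the real commands (no `run` wrapper, as root), the namespace gives the nvmf_tgt process its own isolated stack, which is why the trace later launches it via `ip netns exec cvl_0_0_ns_spdk`.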
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=240512 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 240512 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 240512 ']' 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.251 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=240532 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@752 -- # digest=null 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8980a7ff4e53c94919ac45895e877f1ea9a2ed8fc46ab764 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.wcM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 8980a7ff4e53c94919ac45895e877f1ea9a2ed8fc46ab764 0 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8980a7ff4e53c94919ac45895e877f1ea9a2ed8fc46ab764 0 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8980a7ff4e53c94919ac45895e877f1ea9a2ed8fc46ab764 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.wcM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.wcM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wcM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=89e963436d4326b9c8422fb1aed29bb64138dd0e8f2576106e6316a5ef30eaf5 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.1gM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 89e963436d4326b9c8422fb1aed29bb64138dd0e8f2576106e6316a5ef30eaf5 3 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 89e963436d4326b9c8422fb1aed29bb64138dd0e8f2576106e6316a5ef30eaf5 3 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=89e963436d4326b9c8422fb1aed29bb64138dd0e8f2576106e6316a5ef30eaf5 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.1gM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.1gM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1gM 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5f1d273626cf1c783822b440abdde11d 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.bFx 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5f1d273626cf1c783822b440abdde11d 1 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
5f1d273626cf1c783822b440abdde11d 1 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5f1d273626cf1c783822b440abdde11d 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.bFx 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.bFx 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.bFx 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:19.510 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=fb01657318c082d3b19b8d18bba53724d4896184568c1f0e 00:19:19.769 13:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.VHk 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key fb01657318c082d3b19b8d18bba53724d4896184568c1f0e 2 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 fb01657318c082d3b19b8d18bba53724d4896184568c1f0e 2 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=fb01657318c082d3b19b8d18bba53724d4896184568c1f0e 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.VHk 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.VHk 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.VHk 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a02ae692b91cbcf7e7b4c28ef61c6d6ea2d5401fd92defec 00:19:19.769 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.ony 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a02ae692b91cbcf7e7b4c28ef61c6d6ea2d5401fd92defec 2 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a02ae692b91cbcf7e7b4c28ef61c6d6ea2d5401fd92defec 2 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a02ae692b91cbcf7e7b4c28ef61c6d6ea2d5401fd92defec 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.ony 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.ony 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.ony 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5d8ca2f0ecddcd78d892754c710fc2d6 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.XQt 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5d8ca2f0ecddcd78d892754c710fc2d6 1 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5d8ca2f0ecddcd78d892754c710fc2d6 1 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5d8ca2f0ecddcd78d892754c710fc2d6 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.XQt 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.XQt 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XQt 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=573202cb4df2f2bc78e49744bc14a6fee3d48a3d5789864236ce6a74d5168564 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.mbh 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 573202cb4df2f2bc78e49744bc14a6fee3d48a3d5789864236ce6a74d5168564 3 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 573202cb4df2f2bc78e49744bc14a6fee3d48a3d5789864236ce6a74d5168564 3 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=573202cb4df2f2bc78e49744bc14a6fee3d48a3d5789864236ce6a74d5168564 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.mbh 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.mbh 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.mbh 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 240512 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 240512 ']' 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
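The gen_dhchap_key traces above draw len/2 random bytes with `xxd -p -c0 -l <n> /dev/urandom` and hand the resulting hex string to an inline `python -` heredoc (nvmf/common.sh@731), whose body is not itself echoed by xtrace. Judging from the secrets visible later in this log (e.g. `DHHC-1:02:YTAyYWU2...`, whose base64 payload decodes to the ASCII hex string `a02ae692...` followed by a four-byte trailer), the formatting step appears to base64-encode the hex string together with a CRC32 checksum. A hypothetical standalone reconstruction — not SPDK's actual code, and the CRC byte order is an assumption:

```python
import base64
import zlib

def format_dhchap_key(key: str, hash_id: int) -> str:
    """Hypothetical reconstruction of nvmf/common.sh's format_dhchap_key.

    `key` is the hex string read from /dev/urandom via xxd; `hash_id` is the
    digest index used in the trace (0=null, 1=sha256, 2=sha384, 3=sha512).
    The little-endian CRC32 trailer is an assumption, not confirmed by the log.
    """
    data = key.encode("ascii")                    # the hex string itself is the secret
    crc = zlib.crc32(data).to_bytes(4, "little")  # assumed 4-byte CRC32 trailer
    return f"DHHC-1:{hash_id:02d}:{base64.b64encode(data + crc).decode()}:"
```

For the sha384 key in the trace this yields a `DHHC-1:02:...:` string of the same shape as the `--dhchap-secret` values passed to `nvme connect` further down.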
00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.770 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 240532 /var/tmp/host.sock 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 240532 ']' 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:20.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.028 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wcM 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wcM 00:19:20.287 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wcM 00:19:20.545 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.1gM ]] 00:19:20.545 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gM 00:19:20.545 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.545 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.545 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gM 00:19:20.545 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gM 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.bFx 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.bFx 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.bFx 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.VHk ]] 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VHk 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VHk 00:19:21.112 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VHk 00:19:21.370 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:21.370 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ony 00:19:21.370 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.370 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.627 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.627 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ony 00:19:21.627 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ony 00:19:21.885 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XQt ]] 00:19:21.885 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XQt 00:19:21.885 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.885 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.885 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.885 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XQt 00:19:21.885 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XQt 00:19:22.143 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:22.143 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mbh 00:19:22.143 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.143 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.143 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.143 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.mbh 00:19:22.143 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.mbh 00:19:22.402 13:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:22.402 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:22.402 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.402 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.402 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.402 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.660 13:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.660 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.918 00:19:22.918 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.918 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.918 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.176 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.176 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.176 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.176 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.176 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.176 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.176 { 00:19:23.176 "cntlid": 1, 00:19:23.176 "qid": 0, 00:19:23.176 "state": "enabled", 00:19:23.176 "thread": "nvmf_tgt_poll_group_000", 00:19:23.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.176 "listen_address": { 00:19:23.176 "trtype": "TCP", 00:19:23.176 "adrfam": "IPv4", 00:19:23.176 "traddr": "10.0.0.2", 00:19:23.176 "trsvcid": "4420" 00:19:23.176 }, 00:19:23.176 "peer_address": { 00:19:23.176 "trtype": "TCP", 00:19:23.176 "adrfam": "IPv4", 00:19:23.176 "traddr": "10.0.0.1", 00:19:23.176 "trsvcid": "53378" 00:19:23.176 }, 00:19:23.176 "auth": { 00:19:23.176 "state": "completed", 00:19:23.176 "digest": "sha256", 00:19:23.176 "dhgroup": "null" 00:19:23.176 } 00:19:23.176 } 00:19:23.176 ]' 00:19:23.176 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.176 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.176 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.434 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.434 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.434 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.434 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.434 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.692 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:23.692 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:28.964 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.964 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:28.964 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.964 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.965 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.965 { 00:19:28.965 "cntlid": 3, 00:19:28.965 "qid": 0, 00:19:28.965 "state": "enabled", 00:19:28.965 "thread": "nvmf_tgt_poll_group_000", 00:19:28.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.965 "listen_address": { 00:19:28.965 "trtype": "TCP", 00:19:28.965 "adrfam": "IPv4", 00:19:28.965 
"traddr": "10.0.0.2", 00:19:28.965 "trsvcid": "4420" 00:19:28.965 }, 00:19:28.965 "peer_address": { 00:19:28.965 "trtype": "TCP", 00:19:28.965 "adrfam": "IPv4", 00:19:28.965 "traddr": "10.0.0.1", 00:19:28.965 "trsvcid": "56724" 00:19:28.965 }, 00:19:28.965 "auth": { 00:19:28.965 "state": "completed", 00:19:28.965 "digest": "sha256", 00:19:28.965 "dhgroup": "null" 00:19:28.965 } 00:19:28.965 } 00:19:28.965 ]' 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.965 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.224 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.224 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.224 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.224 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.224 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.482 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:29.482 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.420 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.678 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.936 00:19:30.936 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.936 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.936 
13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.194 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.194 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.194 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.194 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.194 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.194 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.194 { 00:19:31.194 "cntlid": 5, 00:19:31.195 "qid": 0, 00:19:31.195 "state": "enabled", 00:19:31.195 "thread": "nvmf_tgt_poll_group_000", 00:19:31.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:31.195 "listen_address": { 00:19:31.195 "trtype": "TCP", 00:19:31.195 "adrfam": "IPv4", 00:19:31.195 "traddr": "10.0.0.2", 00:19:31.195 "trsvcid": "4420" 00:19:31.195 }, 00:19:31.195 "peer_address": { 00:19:31.195 "trtype": "TCP", 00:19:31.195 "adrfam": "IPv4", 00:19:31.195 "traddr": "10.0.0.1", 00:19:31.195 "trsvcid": "56752" 00:19:31.195 }, 00:19:31.195 "auth": { 00:19:31.195 "state": "completed", 00:19:31.195 "digest": "sha256", 00:19:31.195 "dhgroup": "null" 00:19:31.195 } 00:19:31.195 } 00:19:31.195 ]' 00:19:31.195 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.195 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.195 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:31.195 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.195 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.453 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.453 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.453 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.710 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:31.710 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.663 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.922 00:19:33.181 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.181 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.181 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.439 
13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.439 { 00:19:33.439 "cntlid": 7, 00:19:33.439 "qid": 0, 00:19:33.439 "state": "enabled", 00:19:33.439 "thread": "nvmf_tgt_poll_group_000", 00:19:33.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:33.439 "listen_address": { 00:19:33.439 "trtype": "TCP", 00:19:33.439 "adrfam": "IPv4", 00:19:33.439 "traddr": "10.0.0.2", 00:19:33.439 "trsvcid": "4420" 00:19:33.439 }, 00:19:33.439 "peer_address": { 00:19:33.439 "trtype": "TCP", 00:19:33.439 "adrfam": "IPv4", 00:19:33.439 "traddr": "10.0.0.1", 00:19:33.439 "trsvcid": "56780" 00:19:33.439 }, 00:19:33.439 "auth": { 00:19:33.439 "state": "completed", 00:19:33.439 "digest": "sha256", 00:19:33.439 "dhgroup": "null" 00:19:33.439 } 00:19:33.439 } 00:19:33.439 ]' 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.439 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.697 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:19:33.697 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:19:34.631 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.632 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.890 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.149 00:19:35.149 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.149 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.149 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.715 { 00:19:35.715 "cntlid": 9, 00:19:35.715 "qid": 0, 00:19:35.715 "state": "enabled", 00:19:35.715 "thread": "nvmf_tgt_poll_group_000", 00:19:35.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.715 "listen_address": { 00:19:35.715 "trtype": "TCP", 00:19:35.715 "adrfam": "IPv4", 00:19:35.715 "traddr": "10.0.0.2", 00:19:35.715 "trsvcid": "4420" 00:19:35.715 }, 00:19:35.715 "peer_address": { 00:19:35.715 "trtype": "TCP", 00:19:35.715 "adrfam": "IPv4", 00:19:35.715 "traddr": "10.0.0.1", 00:19:35.715 "trsvcid": "56798" 00:19:35.715 
}, 00:19:35.715 "auth": { 00:19:35.715 "state": "completed", 00:19:35.715 "digest": "sha256", 00:19:35.715 "dhgroup": "ffdhe2048" 00:19:35.715 } 00:19:35.715 } 00:19:35.715 ]' 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.715 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.973 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:35.973 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret 
DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.906 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.164 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.422 00:19:37.422 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.422 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.422 13:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.680 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.680 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.680 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.680 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.680 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.680 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.680 { 00:19:37.680 "cntlid": 11, 00:19:37.680 "qid": 0, 00:19:37.680 "state": "enabled", 00:19:37.680 "thread": "nvmf_tgt_poll_group_000", 00:19:37.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.680 "listen_address": { 00:19:37.680 "trtype": "TCP", 00:19:37.680 "adrfam": "IPv4", 00:19:37.680 "traddr": "10.0.0.2", 00:19:37.680 "trsvcid": "4420" 00:19:37.680 }, 00:19:37.680 "peer_address": { 00:19:37.680 "trtype": "TCP", 00:19:37.680 "adrfam": "IPv4", 00:19:37.680 "traddr": "10.0.0.1", 00:19:37.681 "trsvcid": "47930" 00:19:37.681 }, 00:19:37.681 "auth": { 00:19:37.681 "state": "completed", 00:19:37.681 "digest": "sha256", 00:19:37.681 "dhgroup": "ffdhe2048" 00:19:37.681 } 00:19:37.681 } 00:19:37.681 ]' 00:19:37.681 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.939 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.939 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.939 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.939 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.939 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.939 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.939 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.197 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:38.198 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:39.132 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.132 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.132 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.132 13:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.132 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.132 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.132 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.132 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.390 13:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.390 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.648 00:19:39.648 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.648 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.648 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.907 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.907 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.907 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.907 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.907 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.907 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.907 { 00:19:39.907 "cntlid": 13, 00:19:39.907 "qid": 0, 00:19:39.907 "state": "enabled", 00:19:39.907 "thread": "nvmf_tgt_poll_group_000", 00:19:39.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.907 "listen_address": { 00:19:39.907 "trtype": "TCP", 00:19:39.907 "adrfam": "IPv4", 00:19:39.907 "traddr": "10.0.0.2", 00:19:39.907 "trsvcid": "4420" 00:19:39.907 }, 00:19:39.907 "peer_address": { 00:19:39.907 "trtype": "TCP", 00:19:39.907 "adrfam": "IPv4", 00:19:39.907 "traddr": "10.0.0.1", 00:19:39.907 "trsvcid": "47954" 00:19:39.907 }, 00:19:39.907 "auth": { 00:19:39.907 "state": "completed", 00:19:39.907 "digest": "sha256", 00:19:39.907 "dhgroup": "ffdhe2048" 00:19:39.907 } 00:19:39.907 } 00:19:39.907 ]' 00:19:39.907 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.165 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.165 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.165 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.165 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.165 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.165 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.165 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:19:40.423 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:40.423 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.389 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.647 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.905 00:19:41.905 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.905 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.905 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.163 { 00:19:42.163 "cntlid": 15, 00:19:42.163 "qid": 0, 00:19:42.163 "state": "enabled", 00:19:42.163 "thread": "nvmf_tgt_poll_group_000", 00:19:42.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:42.163 "listen_address": { 00:19:42.163 "trtype": "TCP", 00:19:42.163 "adrfam": "IPv4", 00:19:42.163 "traddr": "10.0.0.2", 00:19:42.163 "trsvcid": "4420" 00:19:42.163 }, 00:19:42.163 "peer_address": { 00:19:42.163 "trtype": "TCP", 00:19:42.163 "adrfam": "IPv4", 00:19:42.163 "traddr": "10.0.0.1", 00:19:42.163 "trsvcid": "47984" 00:19:42.163 }, 00:19:42.163 "auth": { 00:19:42.163 
"state": "completed", 00:19:42.163 "digest": "sha256", 00:19:42.163 "dhgroup": "ffdhe2048" 00:19:42.163 } 00:19:42.163 } 00:19:42.163 ]' 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.163 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.422 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.422 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.422 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.422 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.422 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.680 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:19:42.680 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:19:43.613 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.613 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.613 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.613 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.613 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.613 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.613 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.614 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.614 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.614 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.871 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.129 00:19:44.129 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.129 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.129 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.388 
13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.388 { 00:19:44.388 "cntlid": 17, 00:19:44.388 "qid": 0, 00:19:44.388 "state": "enabled", 00:19:44.388 "thread": "nvmf_tgt_poll_group_000", 00:19:44.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.388 "listen_address": { 00:19:44.388 "trtype": "TCP", 00:19:44.388 "adrfam": "IPv4", 00:19:44.388 "traddr": "10.0.0.2", 00:19:44.388 "trsvcid": "4420" 00:19:44.388 }, 00:19:44.388 "peer_address": { 00:19:44.388 "trtype": "TCP", 00:19:44.388 "adrfam": "IPv4", 00:19:44.388 "traddr": "10.0.0.1", 00:19:44.388 "trsvcid": "48018" 00:19:44.388 }, 00:19:44.388 "auth": { 00:19:44.388 "state": "completed", 00:19:44.388 "digest": "sha256", 00:19:44.388 "dhgroup": "ffdhe3072" 00:19:44.388 } 00:19:44.388 } 00:19:44.388 ]' 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.388 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.646 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.646 13:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.646 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.646 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.646 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.905 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:44.905 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:45.839 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.839 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.839 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.839 13:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.839 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.839 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.839 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.839 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.097 13:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.097 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.355 00:19:46.355 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.355 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.355 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.613 { 00:19:46.613 "cntlid": 19, 00:19:46.613 "qid": 0, 00:19:46.613 "state": "enabled", 00:19:46.613 "thread": "nvmf_tgt_poll_group_000", 00:19:46.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.613 "listen_address": { 00:19:46.613 "trtype": "TCP", 00:19:46.613 "adrfam": "IPv4", 00:19:46.613 "traddr": "10.0.0.2", 00:19:46.613 "trsvcid": "4420" 00:19:46.613 }, 00:19:46.613 "peer_address": { 00:19:46.613 "trtype": "TCP", 00:19:46.613 "adrfam": "IPv4", 00:19:46.613 "traddr": "10.0.0.1", 00:19:46.613 "trsvcid": "42566" 00:19:46.613 }, 00:19:46.613 "auth": { 00:19:46.613 "state": "completed", 00:19:46.613 "digest": "sha256", 00:19:46.613 "dhgroup": "ffdhe3072" 00:19:46.613 } 00:19:46.613 } 00:19:46.613 ]' 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.613 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.871 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.871 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.871 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:19:47.129 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:47.129 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.064 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.323 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.581 00:19:48.581 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.581 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.581 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.839 { 00:19:48.839 "cntlid": 21, 00:19:48.839 "qid": 0, 00:19:48.839 "state": "enabled", 00:19:48.839 "thread": "nvmf_tgt_poll_group_000", 00:19:48.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.839 "listen_address": { 00:19:48.839 "trtype": "TCP", 00:19:48.839 "adrfam": "IPv4", 00:19:48.839 "traddr": "10.0.0.2", 00:19:48.839 "trsvcid": "4420" 00:19:48.839 }, 00:19:48.839 "peer_address": { 00:19:48.839 "trtype": "TCP", 00:19:48.839 "adrfam": "IPv4", 
00:19:48.839 "traddr": "10.0.0.1", 00:19:48.839 "trsvcid": "42596" 00:19:48.839 }, 00:19:48.839 "auth": { 00:19:48.839 "state": "completed", 00:19:48.839 "digest": "sha256", 00:19:48.839 "dhgroup": "ffdhe3072" 00:19:48.839 } 00:19:48.839 } 00:19:48.839 ]' 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.839 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.097 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.097 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.097 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.097 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.097 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.355 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:49.355 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:50.290 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.290 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.290 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.290 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.290 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.290 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.290 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.290 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:50.549 13:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.549 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.115 00:19:51.115 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.115 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.115 13:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.373 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.373 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.373 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.373 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.374 { 00:19:51.374 "cntlid": 23, 00:19:51.374 "qid": 0, 00:19:51.374 "state": "enabled", 00:19:51.374 "thread": "nvmf_tgt_poll_group_000", 00:19:51.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.374 "listen_address": { 00:19:51.374 "trtype": "TCP", 00:19:51.374 "adrfam": "IPv4", 00:19:51.374 "traddr": "10.0.0.2", 00:19:51.374 "trsvcid": "4420" 00:19:51.374 }, 00:19:51.374 "peer_address": { 00:19:51.374 "trtype": "TCP", 00:19:51.374 "adrfam": "IPv4", 00:19:51.374 "traddr": "10.0.0.1", 00:19:51.374 "trsvcid": "42622" 00:19:51.374 }, 00:19:51.374 "auth": { 00:19:51.374 "state": "completed", 00:19:51.374 "digest": "sha256", 00:19:51.374 "dhgroup": "ffdhe3072" 00:19:51.374 } 00:19:51.374 } 00:19:51.374 ]' 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.374 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.632 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:19:51.632 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.565 13:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.565 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.824 
13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.824 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.389 00:19:53.389 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.389 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.389 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.648 { 00:19:53.648 "cntlid": 25, 00:19:53.648 "qid": 0, 00:19:53.648 "state": "enabled", 00:19:53.648 "thread": "nvmf_tgt_poll_group_000", 00:19:53.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.648 "listen_address": { 00:19:53.648 "trtype": "TCP", 00:19:53.648 "adrfam": "IPv4", 00:19:53.648 "traddr": "10.0.0.2", 00:19:53.648 "trsvcid": "4420" 00:19:53.648 }, 00:19:53.648 "peer_address": { 00:19:53.648 "trtype": "TCP", 00:19:53.648 "adrfam": "IPv4", 00:19:53.648 "traddr": "10.0.0.1", 00:19:53.648 "trsvcid": "42634" 00:19:53.648 }, 00:19:53.648 "auth": { 00:19:53.648 "state": "completed", 00:19:53.648 "digest": "sha256", 00:19:53.648 "dhgroup": "ffdhe4096" 00:19:53.648 } 00:19:53.648 } 00:19:53.648 ]' 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.648 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:19:53.906 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:53.906 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.841 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.407 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.408 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.408 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.408 13:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.665 00:19:55.665 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.665 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.665 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.924 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.925 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.925 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.925 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.925 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.925 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.925 { 00:19:55.925 "cntlid": 27, 00:19:55.925 "qid": 0, 00:19:55.925 "state": "enabled", 00:19:55.925 "thread": "nvmf_tgt_poll_group_000", 00:19:55.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.925 "listen_address": { 00:19:55.925 "trtype": "TCP", 00:19:55.925 "adrfam": "IPv4", 00:19:55.925 "traddr": "10.0.0.2", 00:19:55.925 "trsvcid": "4420" 00:19:55.925 }, 00:19:55.925 "peer_address": { 
00:19:55.925 "trtype": "TCP", 00:19:55.925 "adrfam": "IPv4", 00:19:55.925 "traddr": "10.0.0.1", 00:19:55.925 "trsvcid": "42676" 00:19:55.925 }, 00:19:55.925 "auth": { 00:19:55.925 "state": "completed", 00:19:55.925 "digest": "sha256", 00:19:55.925 "dhgroup": "ffdhe4096" 00:19:55.925 } 00:19:55.925 } 00:19:55.925 ]' 00:19:55.925 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.183 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.183 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.183 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.183 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.183 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.183 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.183 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.442 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:56.442 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.376 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.634 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:57.634 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.634 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.634 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:57.634 13:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.634 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.635 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.635 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.635 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.635 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.635 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.635 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.635 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.201 00:19:58.201 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.201 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.201 13:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.201 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.201 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.201 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.201 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.201 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.201 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.201 { 00:19:58.201 "cntlid": 29, 00:19:58.201 "qid": 0, 00:19:58.201 "state": "enabled", 00:19:58.201 "thread": "nvmf_tgt_poll_group_000", 00:19:58.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.201 "listen_address": { 00:19:58.201 "trtype": "TCP", 00:19:58.201 "adrfam": "IPv4", 00:19:58.201 "traddr": "10.0.0.2", 00:19:58.201 "trsvcid": "4420" 00:19:58.201 }, 00:19:58.201 "peer_address": { 00:19:58.201 "trtype": "TCP", 00:19:58.201 "adrfam": "IPv4", 00:19:58.201 "traddr": "10.0.0.1", 00:19:58.201 "trsvcid": "36512" 00:19:58.201 }, 00:19:58.201 "auth": { 00:19:58.201 "state": "completed", 00:19:58.201 "digest": "sha256", 00:19:58.201 "dhgroup": "ffdhe4096" 00:19:58.201 } 00:19:58.201 } 00:19:58.201 ]' 00:19:58.201 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.459 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.459 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # 
jq -r '.[0].auth.dhgroup' 00:19:58.459 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.459 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.459 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.459 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.459 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.717 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:58.718 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.651 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.910 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.476 00:20:00.476 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.476 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.476 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.734 { 00:20:00.734 "cntlid": 31, 00:20:00.734 "qid": 0, 00:20:00.734 "state": "enabled", 00:20:00.734 "thread": "nvmf_tgt_poll_group_000", 00:20:00.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.734 "listen_address": { 00:20:00.734 "trtype": "TCP", 00:20:00.734 "adrfam": "IPv4", 00:20:00.734 "traddr": "10.0.0.2", 00:20:00.734 "trsvcid": "4420" 00:20:00.734 }, 00:20:00.734 "peer_address": { 00:20:00.734 "trtype": "TCP", 00:20:00.734 "adrfam": "IPv4", 00:20:00.734 "traddr": "10.0.0.1", 00:20:00.734 "trsvcid": "36540" 00:20:00.734 }, 00:20:00.734 "auth": { 00:20:00.734 "state": "completed", 00:20:00.734 "digest": "sha256", 00:20:00.734 "dhgroup": "ffdhe4096" 00:20:00.734 } 00:20:00.734 } 00:20:00.734 ]' 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.734 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:20:00.992 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:00.992 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.926 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.184 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.750 00:20:02.750 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.750 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.750 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.008 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.008 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.008 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.008 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.008 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.008 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.008 { 00:20:03.008 "cntlid": 33, 00:20:03.008 "qid": 0, 00:20:03.008 "state": "enabled", 00:20:03.008 "thread": "nvmf_tgt_poll_group_000", 00:20:03.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.008 "listen_address": { 00:20:03.008 "trtype": "TCP", 00:20:03.008 "adrfam": "IPv4", 00:20:03.008 "traddr": "10.0.0.2", 00:20:03.008 "trsvcid": "4420" 00:20:03.008 }, 00:20:03.008 "peer_address": { 00:20:03.008 "trtype": "TCP", 00:20:03.008 "adrfam": "IPv4", 
00:20:03.008  "traddr": "10.0.0.1",
00:20:03.008  "trsvcid": "36570"
00:20:03.008  },
00:20:03.008  "auth": {
00:20:03.008  "state": "completed",
00:20:03.008  "digest": "sha256",
00:20:03.008  "dhgroup": "ffdhe6144"
00:20:03.008  }
00:20:03.008  }
00:20:03.008  ]'
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:03.008  13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:03.267  13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=:
00:20:03.267  13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=:
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:04.201  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:04.201  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.768  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:05.333
00:20:05.333  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:05.333  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:05.333  13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.591  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.591  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.591  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.591  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:05.592  {
00:20:05.592  "cntlid": 35,
00:20:05.592  "qid": 0,
00:20:05.592  "state": "enabled",
00:20:05.592  "thread": "nvmf_tgt_poll_group_000",
00:20:05.592  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:05.592  "listen_address": {
00:20:05.592  "trtype": "TCP",
00:20:05.592  "adrfam": "IPv4",
00:20:05.592  "traddr": "10.0.0.2",
00:20:05.592  "trsvcid": "4420"
00:20:05.592  },
00:20:05.592  "peer_address": {
00:20:05.592  "trtype": "TCP",
00:20:05.592  "adrfam": "IPv4",
00:20:05.592  "traddr": "10.0.0.1",
00:20:05.592  "trsvcid": "36584"
00:20:05.592  },
00:20:05.592  "auth": {
00:20:05.592  "state": "completed",
00:20:05.592  "digest": "sha256",
00:20:05.592  "dhgroup": "ffdhe6144"
00:20:05.592  }
00:20:05.592  }
00:20:05.592  ]'
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.592  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:05.849  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==:
00:20:05.849  13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==:
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.782  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:06.782  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.040  13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.605
00:20:07.605  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:07.605  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:07.605  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:07.863  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:07.863  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:07.863  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:07.863  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.863  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:08.121  {
00:20:08.121  "cntlid": 37,
00:20:08.121  "qid": 0,
00:20:08.121  "state": "enabled",
00:20:08.121  "thread": "nvmf_tgt_poll_group_000",
00:20:08.121  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:08.121  "listen_address": {
00:20:08.121  "trtype": "TCP",
00:20:08.121  "adrfam": "IPv4",
00:20:08.121  "traddr": "10.0.0.2",
00:20:08.121  "trsvcid": "4420"
00:20:08.121  },
00:20:08.121  "peer_address": {
00:20:08.121  "trtype": "TCP",
00:20:08.121  "adrfam": "IPv4",
00:20:08.121  "traddr": "10.0.0.1",
00:20:08.121  "trsvcid": "60102"
00:20:08.121  },
00:20:08.121  "auth": {
00:20:08.121  "state": "completed",
00:20:08.121  "digest": "sha256",
00:20:08.121  "dhgroup": "ffdhe6144"
00:20:08.121  }
00:20:08.121  }
00:20:08.121  ]'
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:08.121  13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:08.379  13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb:
00:20:08.379  13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb:
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.314  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:09.314  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:09.571  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:20:09.571  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:09.572  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:10.137
00:20:10.137  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:10.137  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.137  13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:10.395  {
00:20:10.395  "cntlid": 39,
00:20:10.395  "qid": 0,
00:20:10.395  "state": "enabled",
00:20:10.395  "thread": "nvmf_tgt_poll_group_000",
00:20:10.395  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:10.395  "listen_address": {
00:20:10.395  "trtype": "TCP",
00:20:10.395  "adrfam": "IPv4",
00:20:10.395  "traddr": "10.0.0.2",
00:20:10.395  "trsvcid": "4420"
00:20:10.395  },
00:20:10.395  "peer_address": {
00:20:10.395  "trtype": "TCP",
00:20:10.395  "adrfam": "IPv4",
00:20:10.395  "traddr": "10.0.0.1",
00:20:10.395  "trsvcid": "60136"
00:20:10.395  },
00:20:10.395  "auth": {
00:20:10.395  "state": "completed",
00:20:10.395  "digest": "sha256",
00:20:10.395  "dhgroup": "ffdhe6144"
00:20:10.395  }
00:20:10.395  }
00:20:10.395  ]'
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:10.395  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:10.653  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:10.653  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:10.653  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.653  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.653  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.911  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=:
00:20:10.911  13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=:
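[Editor's note] The transcript above repeats one `connect_authenticate` cycle per key and DH group: restrict the host's DH-CHAP options, add the host NQN to the subsystem with the key under test, attach a controller (which forces authentication), verify the qpair via `nvmf_subsystem_get_qpairs`, then detach and remove the host. As a hedged summary, the sketch below only *prints* the command lines that one cycle issues; the paths, addresses, and NQNs are copied from this run, `connect_cycle` is an illustrative name (not a function in auth.sh), and `rpc_cmd` stands for the suite's target-side RPC helper. A real invocation needs live SPDK target and host daemons.

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect_authenticate cycle from this log.
# Values are taken from the transcript; commands are echoed, not executed.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

connect_cycle() {
    local digest=$1 dhgroup=$2 keyid=$3
    # 1) host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup
    echo "$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    # 2) target side: allow the host with the key pair under test
    echo "rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid"
    # 3) attach a controller, which forces the authentication exchange
    echo "$RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid"
    # 4) verify the controller came up, then tear it down
    echo "$RPC -s $HOST_SOCK bdev_nvme_get_controllers"
    echo "$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0"
}

connect_cycle sha256 ffdhe6144 0
```

The same five-step shape recurs below with ffdhe8192, matching the outer `for dhgroup` / `for keyid` loops visible in the trace.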
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.843  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:11.843  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.101  13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.035
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:13.035  {
00:20:13.035  "cntlid": 41,
00:20:13.035  "qid": 0,
00:20:13.035  "state": "enabled",
00:20:13.035  "thread": "nvmf_tgt_poll_group_000",
00:20:13.035  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:13.035  "listen_address": {
00:20:13.035  "trtype": "TCP",
00:20:13.035  "adrfam": "IPv4",
00:20:13.035  "traddr": "10.0.0.2",
00:20:13.035  "trsvcid": "4420"
00:20:13.035  },
00:20:13.035  "peer_address": {
00:20:13.035  "trtype": "TCP",
00:20:13.035  "adrfam": "IPv4",
00:20:13.035  "traddr": "10.0.0.1",
00:20:13.035  "trsvcid": "60180"
00:20:13.035  },
00:20:13.035  "auth": {
00:20:13.035  "state": "completed",
00:20:13.035  "digest": "sha256",
00:20:13.035  "dhgroup": "ffdhe8192"
00:20:13.035  }
00:20:13.035  }
00:20:13.035  ]'
00:20:13.035  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:13.293  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:13.293  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:13.293  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:13.293  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:13.293  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:13.293  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:13.293  13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.551  13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=:
00:20:13.551  13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=:
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.485  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:14.485  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.743  13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.678
00:20:15.678  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:15.678  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:15.678  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:15.937  {
00:20:15.937  "cntlid": 43,
00:20:15.937  "qid": 0,
00:20:15.937  "state": "enabled",
00:20:15.937  "thread": "nvmf_tgt_poll_group_000",
00:20:15.937  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:15.937  "listen_address": {
00:20:15.937  "trtype": "TCP",
00:20:15.937  "adrfam": "IPv4",
00:20:15.937  "traddr": "10.0.0.2",
00:20:15.937  "trsvcid": "4420"
00:20:15.937  },
00:20:15.937  "peer_address": {
00:20:15.937  "trtype": "TCP",
00:20:15.937  "adrfam": "IPv4",
00:20:15.937  "traddr": "10.0.0.1",
00:20:15.937  "trsvcid": "60206"
00:20:15.937  },
00:20:15.937  "auth": {
00:20:15.937  "state": "completed",
00:20:15.937  "digest": "sha256",
00:20:15.937  "dhgroup": "ffdhe8192"
00:20:15.937  }
00:20:15.937  }
00:20:15.937  ]'
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.937  13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.195 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:16.195 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:17.128 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.128 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.128 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.128 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.386 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.386 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.386 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:17.386 13:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.645 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.579 00:20:18.579 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.579 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.579 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.579 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.579 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.579 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.579 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.837 { 00:20:18.837 "cntlid": 45, 00:20:18.837 "qid": 0, 00:20:18.837 "state": "enabled", 00:20:18.837 "thread": "nvmf_tgt_poll_group_000", 00:20:18.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.837 "listen_address": { 00:20:18.837 "trtype": "TCP", 00:20:18.837 "adrfam": "IPv4", 00:20:18.837 "traddr": "10.0.0.2", 00:20:18.837 
"trsvcid": "4420" 00:20:18.837 }, 00:20:18.837 "peer_address": { 00:20:18.837 "trtype": "TCP", 00:20:18.837 "adrfam": "IPv4", 00:20:18.837 "traddr": "10.0.0.1", 00:20:18.837 "trsvcid": "56578" 00:20:18.837 }, 00:20:18.837 "auth": { 00:20:18.837 "state": "completed", 00:20:18.837 "digest": "sha256", 00:20:18.837 "dhgroup": "ffdhe8192" 00:20:18.837 } 00:20:18.837 } 00:20:18.837 ]' 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.837 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.095 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:19.095 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.030 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.288 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.222 00:20:21.222 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.222 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.222 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.481 { 00:20:21.481 "cntlid": 47, 00:20:21.481 "qid": 0, 00:20:21.481 "state": "enabled", 00:20:21.481 "thread": "nvmf_tgt_poll_group_000", 00:20:21.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.481 "listen_address": { 00:20:21.481 "trtype": "TCP", 00:20:21.481 "adrfam": "IPv4", 00:20:21.481 "traddr": "10.0.0.2", 00:20:21.481 "trsvcid": "4420" 00:20:21.481 }, 00:20:21.481 "peer_address": { 00:20:21.481 "trtype": "TCP", 00:20:21.481 "adrfam": "IPv4", 00:20:21.481 "traddr": "10.0.0.1", 00:20:21.481 "trsvcid": "56602" 00:20:21.481 }, 00:20:21.481 "auth": { 00:20:21.481 "state": "completed", 00:20:21.481 "digest": "sha256", 00:20:21.481 "dhgroup": "ffdhe8192" 00:20:21.481 } 00:20:21.481 } 00:20:21.481 ]' 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.481 13:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.481 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.739 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:21.739 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.672 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.930 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.188 00:20:23.446 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.446 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.446 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.704 13:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.704 { 00:20:23.704 "cntlid": 49, 00:20:23.704 "qid": 0, 00:20:23.704 "state": "enabled", 00:20:23.704 "thread": "nvmf_tgt_poll_group_000", 00:20:23.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.704 "listen_address": { 00:20:23.704 "trtype": "TCP", 00:20:23.704 "adrfam": "IPv4", 00:20:23.704 "traddr": "10.0.0.2", 00:20:23.704 "trsvcid": "4420" 00:20:23.704 }, 00:20:23.704 "peer_address": { 00:20:23.704 "trtype": "TCP", 00:20:23.704 "adrfam": "IPv4", 00:20:23.704 "traddr": "10.0.0.1", 00:20:23.704 "trsvcid": "56638" 00:20:23.704 }, 00:20:23.704 "auth": { 00:20:23.704 "state": "completed", 00:20:23.704 "digest": "sha384", 00:20:23.704 "dhgroup": "null" 00:20:23.704 } 00:20:23.704 } 00:20:23.704 ]' 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.704 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.962 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:23.962 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:24.896 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.896 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.896 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.896 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.896 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.896 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.897 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:24.897 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.155 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.413 00:20:25.672 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.672 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.672 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.930 { 00:20:25.930 "cntlid": 51, 00:20:25.930 "qid": 0, 00:20:25.930 "state": "enabled", 00:20:25.930 "thread": "nvmf_tgt_poll_group_000", 00:20:25.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.930 "listen_address": { 
00:20:25.930 "trtype": "TCP", 00:20:25.930 "adrfam": "IPv4", 00:20:25.930 "traddr": "10.0.0.2", 00:20:25.930 "trsvcid": "4420" 00:20:25.930 }, 00:20:25.930 "peer_address": { 00:20:25.930 "trtype": "TCP", 00:20:25.930 "adrfam": "IPv4", 00:20:25.930 "traddr": "10.0.0.1", 00:20:25.930 "trsvcid": "56658" 00:20:25.930 }, 00:20:25.930 "auth": { 00:20:25.930 "state": "completed", 00:20:25.930 "digest": "sha384", 00:20:25.930 "dhgroup": "null" 00:20:25.930 } 00:20:25.930 } 00:20:25.930 ]' 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.930 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.188 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:26.188 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.122 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.380 
13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.380 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.946 00:20:27.946 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.946 13:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.946 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.205 { 00:20:28.205 "cntlid": 53, 00:20:28.205 "qid": 0, 00:20:28.205 "state": "enabled", 00:20:28.205 "thread": "nvmf_tgt_poll_group_000", 00:20:28.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.205 "listen_address": { 00:20:28.205 "trtype": "TCP", 00:20:28.205 "adrfam": "IPv4", 00:20:28.205 "traddr": "10.0.0.2", 00:20:28.205 "trsvcid": "4420" 00:20:28.205 }, 00:20:28.205 "peer_address": { 00:20:28.205 "trtype": "TCP", 00:20:28.205 "adrfam": "IPv4", 00:20:28.205 "traddr": "10.0.0.1", 00:20:28.205 "trsvcid": "36374" 00:20:28.205 }, 00:20:28.205 "auth": { 00:20:28.205 "state": "completed", 00:20:28.205 "digest": "sha384", 00:20:28.205 "dhgroup": "null" 00:20:28.205 } 00:20:28.205 } 00:20:28.205 ]' 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.205 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.463 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:28.463 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:29.397 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.397 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.397 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.397 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.397 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.397 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.397 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.398 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.656 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.914 00:20:29.914 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.914 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.914 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.481 { 00:20:30.481 "cntlid": 55, 00:20:30.481 "qid": 0, 00:20:30.481 "state": "enabled", 00:20:30.481 "thread": "nvmf_tgt_poll_group_000", 00:20:30.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.481 "listen_address": { 00:20:30.481 "trtype": "TCP", 00:20:30.481 "adrfam": "IPv4", 00:20:30.481 "traddr": "10.0.0.2", 00:20:30.481 "trsvcid": "4420" 00:20:30.481 }, 00:20:30.481 "peer_address": { 00:20:30.481 "trtype": "TCP", 00:20:30.481 "adrfam": "IPv4", 00:20:30.481 "traddr": "10.0.0.1", 00:20:30.481 "trsvcid": "36410" 00:20:30.481 }, 00:20:30.481 "auth": { 00:20:30.481 "state": "completed", 00:20:30.481 "digest": "sha384", 00:20:30.481 "dhgroup": "null" 00:20:30.481 } 00:20:30.481 } 00:20:30.481 ]' 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.481 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.739 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:30.739 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.674 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.933 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.933 13:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.191 00:20:32.191 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.191 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.191 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.450 { 00:20:32.450 "cntlid": 57, 00:20:32.450 "qid": 0, 00:20:32.450 "state": "enabled", 00:20:32.450 "thread": "nvmf_tgt_poll_group_000", 00:20:32.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.450 "listen_address": { 00:20:32.450 "trtype": "TCP", 00:20:32.450 "adrfam": "IPv4", 00:20:32.450 "traddr": "10.0.0.2", 00:20:32.450 "trsvcid": "4420" 00:20:32.450 }, 00:20:32.450 "peer_address": { 
00:20:32.450 "trtype": "TCP", 00:20:32.450 "adrfam": "IPv4", 00:20:32.450 "traddr": "10.0.0.1", 00:20:32.450 "trsvcid": "36440" 00:20:32.450 }, 00:20:32.450 "auth": { 00:20:32.450 "state": "completed", 00:20:32.450 "digest": "sha384", 00:20:32.450 "dhgroup": "ffdhe2048" 00:20:32.450 } 00:20:32.450 } 00:20:32.450 ]' 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.450 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.708 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.708 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.708 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.708 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.708 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.966 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:32.966 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.900 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.159 13:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.159 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.417 00:20:34.417 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.417 13:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.417 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.684 { 00:20:34.684 "cntlid": 59, 00:20:34.684 "qid": 0, 00:20:34.684 "state": "enabled", 00:20:34.684 "thread": "nvmf_tgt_poll_group_000", 00:20:34.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.684 "listen_address": { 00:20:34.684 "trtype": "TCP", 00:20:34.684 "adrfam": "IPv4", 00:20:34.684 "traddr": "10.0.0.2", 00:20:34.684 "trsvcid": "4420" 00:20:34.684 }, 00:20:34.684 "peer_address": { 00:20:34.684 "trtype": "TCP", 00:20:34.684 "adrfam": "IPv4", 00:20:34.684 "traddr": "10.0.0.1", 00:20:34.684 "trsvcid": "36474" 00:20:34.684 }, 00:20:34.684 "auth": { 00:20:34.684 "state": "completed", 00:20:34.684 "digest": "sha384", 00:20:34.684 "dhgroup": "ffdhe2048" 00:20:34.684 } 00:20:34.684 } 00:20:34.684 ]' 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.684 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.001 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:35.001 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.033 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.302 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:36.302 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.302 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.302 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:36.302 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.302 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.302 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.302 13:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.303 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.303 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.303 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.303 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.303 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.596 00:20:36.596 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.596 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.596 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.903 { 00:20:36.903 "cntlid": 61, 00:20:36.903 "qid": 0, 00:20:36.903 "state": "enabled", 00:20:36.903 "thread": "nvmf_tgt_poll_group_000", 00:20:36.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.903 "listen_address": { 00:20:36.903 "trtype": "TCP", 00:20:36.903 "adrfam": "IPv4", 00:20:36.903 "traddr": "10.0.0.2", 00:20:36.903 "trsvcid": "4420" 00:20:36.903 }, 00:20:36.903 "peer_address": { 00:20:36.903 "trtype": "TCP", 00:20:36.903 "adrfam": "IPv4", 00:20:36.903 "traddr": "10.0.0.1", 00:20:36.903 "trsvcid": "39834" 00:20:36.903 }, 00:20:36.903 "auth": { 00:20:36.903 "state": "completed", 00:20:36.903 "digest": "sha384", 00:20:36.903 "dhgroup": "ffdhe2048" 00:20:36.903 } 00:20:36.903 } 00:20:36.903 ]' 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:36.903 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.190 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:37.190 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.123 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.381 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.946 00:20:38.946 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.946 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.946 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.204 { 00:20:39.204 "cntlid": 63, 00:20:39.204 "qid": 0, 00:20:39.204 "state": "enabled", 00:20:39.204 "thread": "nvmf_tgt_poll_group_000", 00:20:39.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.204 "listen_address": { 00:20:39.204 "trtype": "TCP", 00:20:39.204 "adrfam": "IPv4", 00:20:39.204 "traddr": "10.0.0.2", 00:20:39.204 "trsvcid": 
"4420" 00:20:39.204 }, 00:20:39.204 "peer_address": { 00:20:39.204 "trtype": "TCP", 00:20:39.204 "adrfam": "IPv4", 00:20:39.204 "traddr": "10.0.0.1", 00:20:39.204 "trsvcid": "39846" 00:20:39.204 }, 00:20:39.204 "auth": { 00:20:39.204 "state": "completed", 00:20:39.204 "digest": "sha384", 00:20:39.204 "dhgroup": "ffdhe2048" 00:20:39.204 } 00:20:39.204 } 00:20:39.204 ]' 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.204 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.462 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:39.462 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:40.394 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.395 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.652 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.653 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.218 00:20:41.218 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.218 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:41.218 13:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.476 { 00:20:41.476 "cntlid": 65, 00:20:41.476 "qid": 0, 00:20:41.476 "state": "enabled", 00:20:41.476 "thread": "nvmf_tgt_poll_group_000", 00:20:41.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.476 "listen_address": { 00:20:41.476 "trtype": "TCP", 00:20:41.476 "adrfam": "IPv4", 00:20:41.476 "traddr": "10.0.0.2", 00:20:41.476 "trsvcid": "4420" 00:20:41.476 }, 00:20:41.476 "peer_address": { 00:20:41.476 "trtype": "TCP", 00:20:41.476 "adrfam": "IPv4", 00:20:41.476 "traddr": "10.0.0.1", 00:20:41.476 "trsvcid": "39864" 00:20:41.476 }, 00:20:41.476 "auth": { 00:20:41.476 "state": "completed", 00:20:41.476 "digest": "sha384", 00:20:41.476 "dhgroup": "ffdhe3072" 00:20:41.476 } 00:20:41.476 } 00:20:41.476 ]' 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.476 13:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.476 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.734 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:41.734 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.712 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.970 13:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.970 13:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.535 00:20:43.535 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.535 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.535 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.792 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.792 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.792 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.792 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.792 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.792 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.792 { 00:20:43.792 "cntlid": 67, 00:20:43.792 "qid": 0, 00:20:43.792 "state": "enabled", 00:20:43.792 "thread": "nvmf_tgt_poll_group_000", 00:20:43.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.792 "listen_address": { 00:20:43.792 "trtype": "TCP", 00:20:43.793 "adrfam": "IPv4", 00:20:43.793 "traddr": "10.0.0.2", 00:20:43.793 "trsvcid": "4420" 00:20:43.793 }, 00:20:43.793 "peer_address": { 00:20:43.793 "trtype": "TCP", 00:20:43.793 "adrfam": "IPv4", 00:20:43.793 "traddr": "10.0.0.1", 00:20:43.793 "trsvcid": "39886" 00:20:43.793 }, 00:20:43.793 "auth": { 00:20:43.793 "state": "completed", 00:20:43.793 "digest": "sha384", 00:20:43.793 "dhgroup": "ffdhe3072" 00:20:43.793 } 00:20:43.793 } 00:20:43.793 ]' 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:43.793 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.051 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:44.051 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.986 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.244 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.810 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.810 { 00:20:45.810 "cntlid": 69, 00:20:45.810 "qid": 0, 00:20:45.810 "state": "enabled", 00:20:45.810 "thread": "nvmf_tgt_poll_group_000", 00:20:45.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.810 "listen_address": { 
00:20:45.810 "trtype": "TCP", 00:20:45.810 "adrfam": "IPv4", 00:20:45.810 "traddr": "10.0.0.2", 00:20:45.810 "trsvcid": "4420" 00:20:45.810 }, 00:20:45.810 "peer_address": { 00:20:45.810 "trtype": "TCP", 00:20:45.810 "adrfam": "IPv4", 00:20:45.810 "traddr": "10.0.0.1", 00:20:45.810 "trsvcid": "39910" 00:20:45.810 }, 00:20:45.810 "auth": { 00:20:45.810 "state": "completed", 00:20:45.810 "digest": "sha384", 00:20:45.810 "dhgroup": "ffdhe3072" 00:20:45.810 } 00:20:45.810 } 00:20:45.810 ]' 00:20:45.810 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.068 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.068 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.068 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.069 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.069 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.069 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.069 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.331 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:46.331 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.276 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.534 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.791 00:20:47.791 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.791 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.791 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.049 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.049 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.049 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.049 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.049 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.049 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.049 { 00:20:48.049 "cntlid": 71, 00:20:48.049 "qid": 0, 00:20:48.049 "state": "enabled", 00:20:48.049 "thread": "nvmf_tgt_poll_group_000", 00:20:48.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.049 "listen_address": { 00:20:48.049 "trtype": "TCP", 00:20:48.049 "adrfam": "IPv4", 00:20:48.049 "traddr": "10.0.0.2", 00:20:48.049 "trsvcid": "4420" 00:20:48.049 }, 00:20:48.049 "peer_address": { 00:20:48.049 "trtype": "TCP", 00:20:48.049 "adrfam": "IPv4", 00:20:48.049 "traddr": "10.0.0.1", 00:20:48.049 "trsvcid": "38034" 00:20:48.049 }, 00:20:48.049 "auth": { 00:20:48.049 "state": "completed", 00:20:48.049 "digest": "sha384", 00:20:48.049 "dhgroup": "ffdhe3072" 00:20:48.049 } 00:20:48.049 } 00:20:48.049 ]' 00:20:48.049 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.307 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.307 13:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.307 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.307 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.307 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.307 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.307 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.565 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:48.565 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.500 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.760 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.326 00:20:50.326 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.326 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.326 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.326 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.326 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.326 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.326 13:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.584 { 00:20:50.584 "cntlid": 73, 00:20:50.584 "qid": 0, 00:20:50.584 "state": "enabled", 00:20:50.584 "thread": "nvmf_tgt_poll_group_000", 00:20:50.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.584 "listen_address": { 00:20:50.584 "trtype": "TCP", 00:20:50.584 "adrfam": "IPv4", 00:20:50.584 "traddr": "10.0.0.2", 00:20:50.584 "trsvcid": "4420" 00:20:50.584 }, 00:20:50.584 "peer_address": { 00:20:50.584 "trtype": "TCP", 00:20:50.584 "adrfam": "IPv4", 00:20:50.584 "traddr": "10.0.0.1", 00:20:50.584 "trsvcid": "38058" 00:20:50.584 }, 00:20:50.584 "auth": { 00:20:50.584 "state": "completed", 00:20:50.584 "digest": "sha384", 00:20:50.584 "dhgroup": "ffdhe4096" 00:20:50.584 } 00:20:50.584 } 00:20:50.584 ]' 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.584 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.584 13:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.842 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:50.842 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.774 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.032 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.033 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.598 00:20:52.598 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.598 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.598 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.598 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.598 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.598 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.598 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.857 { 00:20:52.857 "cntlid": 75, 00:20:52.857 "qid": 0, 00:20:52.857 "state": "enabled", 00:20:52.857 "thread": "nvmf_tgt_poll_group_000", 00:20:52.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.857 
"listen_address": { 00:20:52.857 "trtype": "TCP", 00:20:52.857 "adrfam": "IPv4", 00:20:52.857 "traddr": "10.0.0.2", 00:20:52.857 "trsvcid": "4420" 00:20:52.857 }, 00:20:52.857 "peer_address": { 00:20:52.857 "trtype": "TCP", 00:20:52.857 "adrfam": "IPv4", 00:20:52.857 "traddr": "10.0.0.1", 00:20:52.857 "trsvcid": "38082" 00:20:52.857 }, 00:20:52.857 "auth": { 00:20:52.857 "state": "completed", 00:20:52.857 "digest": "sha384", 00:20:52.857 "dhgroup": "ffdhe4096" 00:20:52.857 } 00:20:52.857 } 00:20:52.857 ]' 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.857 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.115 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:53.115 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.049 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.306 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.307 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.873 00:20:54.873 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:54.873 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.873 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.131 { 00:20:55.131 "cntlid": 77, 00:20:55.131 "qid": 0, 00:20:55.131 "state": "enabled", 00:20:55.131 "thread": "nvmf_tgt_poll_group_000", 00:20:55.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.131 "listen_address": { 00:20:55.131 "trtype": "TCP", 00:20:55.131 "adrfam": "IPv4", 00:20:55.131 "traddr": "10.0.0.2", 00:20:55.131 "trsvcid": "4420" 00:20:55.131 }, 00:20:55.131 "peer_address": { 00:20:55.131 "trtype": "TCP", 00:20:55.131 "adrfam": "IPv4", 00:20:55.131 "traddr": "10.0.0.1", 00:20:55.131 "trsvcid": "38092" 00:20:55.131 }, 00:20:55.131 "auth": { 00:20:55.131 "state": "completed", 00:20:55.131 "digest": "sha384", 00:20:55.131 "dhgroup": "ffdhe4096" 00:20:55.131 } 00:20:55.131 } 00:20:55.131 ]' 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.131 13:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.131 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.389 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:55.389 13:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:20:56.321 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.578 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.578 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.578 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.578 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.578 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.578 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.578 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.837 13:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.837 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.094 00:20:57.094 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.094 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.094 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.352 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.352 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.352 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.352 13:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.352 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.352 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.352 { 00:20:57.352 "cntlid": 79, 00:20:57.352 "qid": 0, 00:20:57.352 "state": "enabled", 00:20:57.352 "thread": "nvmf_tgt_poll_group_000", 00:20:57.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.352 "listen_address": { 00:20:57.352 "trtype": "TCP", 00:20:57.352 "adrfam": "IPv4", 00:20:57.352 "traddr": "10.0.0.2", 00:20:57.352 "trsvcid": "4420" 00:20:57.352 }, 00:20:57.352 "peer_address": { 00:20:57.352 "trtype": "TCP", 00:20:57.352 "adrfam": "IPv4", 00:20:57.352 "traddr": "10.0.0.1", 00:20:57.352 "trsvcid": "43560" 00:20:57.353 }, 00:20:57.353 "auth": { 00:20:57.353 "state": "completed", 00:20:57.353 "digest": "sha384", 00:20:57.353 "dhgroup": "ffdhe4096" 00:20:57.353 } 00:20:57.353 } 00:20:57.353 ]' 00:20:57.353 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.610 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.610 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.610 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.610 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.610 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.610 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.611 13:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.869 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:57.869 13:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:20:58.804 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:58.805 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.062 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.628 00:20:59.628 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.628 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.628 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.886 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.886 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.886 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.886 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.144 { 00:21:00.144 "cntlid": 81, 00:21:00.144 "qid": 0, 00:21:00.144 "state": "enabled", 00:21:00.144 "thread": "nvmf_tgt_poll_group_000", 00:21:00.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.144 "listen_address": { 
00:21:00.144 "trtype": "TCP", 00:21:00.144 "adrfam": "IPv4", 00:21:00.144 "traddr": "10.0.0.2", 00:21:00.144 "trsvcid": "4420" 00:21:00.144 }, 00:21:00.144 "peer_address": { 00:21:00.144 "trtype": "TCP", 00:21:00.144 "adrfam": "IPv4", 00:21:00.144 "traddr": "10.0.0.1", 00:21:00.144 "trsvcid": "43584" 00:21:00.144 }, 00:21:00.144 "auth": { 00:21:00.144 "state": "completed", 00:21:00.144 "digest": "sha384", 00:21:00.144 "dhgroup": "ffdhe6144" 00:21:00.144 } 00:21:00.144 } 00:21:00.144 ]' 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.144 13:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.402 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:00.402 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.336 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.594 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.159 00:21:02.159 13:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.159 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.159 13:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.417 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.418 { 00:21:02.418 "cntlid": 83, 00:21:02.418 "qid": 0, 00:21:02.418 "state": "enabled", 00:21:02.418 "thread": "nvmf_tgt_poll_group_000", 00:21:02.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.418 "listen_address": { 00:21:02.418 "trtype": "TCP", 00:21:02.418 "adrfam": "IPv4", 00:21:02.418 "traddr": "10.0.0.2", 00:21:02.418 "trsvcid": "4420" 00:21:02.418 }, 00:21:02.418 "peer_address": { 00:21:02.418 "trtype": "TCP", 00:21:02.418 "adrfam": "IPv4", 00:21:02.418 "traddr": "10.0.0.1", 00:21:02.418 "trsvcid": "43592" 00:21:02.418 }, 00:21:02.418 "auth": { 00:21:02.418 "state": "completed", 00:21:02.418 "digest": "sha384", 00:21:02.418 "dhgroup": "ffdhe6144" 00:21:02.418 } 00:21:02.418 } 00:21:02.418 ]' 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.418 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.676 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.676 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.676 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.934 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:02.934 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.867 13:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.867 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.433 00:21:04.433 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.433 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.433 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.691 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.691 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.691 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.691 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.949 { 00:21:04.949 "cntlid": 85, 00:21:04.949 "qid": 0, 00:21:04.949 "state": "enabled", 00:21:04.949 "thread": "nvmf_tgt_poll_group_000", 00:21:04.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.949 "listen_address": { 00:21:04.949 "trtype": "TCP", 00:21:04.949 "adrfam": "IPv4", 00:21:04.949 "traddr": "10.0.0.2", 00:21:04.949 "trsvcid": "4420" 00:21:04.949 }, 00:21:04.949 "peer_address": { 00:21:04.949 "trtype": "TCP", 00:21:04.949 "adrfam": "IPv4", 00:21:04.949 "traddr": "10.0.0.1", 00:21:04.949 "trsvcid": "43618" 00:21:04.949 }, 00:21:04.949 "auth": { 00:21:04.949 "state": "completed", 00:21:04.949 "digest": "sha384", 00:21:04.949 "dhgroup": "ffdhe6144" 00:21:04.949 } 00:21:04.949 } 00:21:04.949 ]' 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.949 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.215 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:05.215 13:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.148 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.406 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.970 00:21:06.970 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.970 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.970 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.228 { 00:21:07.228 "cntlid": 87, 00:21:07.228 "qid": 0, 00:21:07.228 "state": "enabled", 00:21:07.228 "thread": "nvmf_tgt_poll_group_000", 00:21:07.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.228 "listen_address": { 00:21:07.228 "trtype": 
"TCP", 00:21:07.228 "adrfam": "IPv4", 00:21:07.228 "traddr": "10.0.0.2", 00:21:07.228 "trsvcid": "4420" 00:21:07.228 }, 00:21:07.228 "peer_address": { 00:21:07.228 "trtype": "TCP", 00:21:07.228 "adrfam": "IPv4", 00:21:07.228 "traddr": "10.0.0.1", 00:21:07.228 "trsvcid": "41466" 00:21:07.228 }, 00:21:07.228 "auth": { 00:21:07.228 "state": "completed", 00:21:07.228 "digest": "sha384", 00:21:07.228 "dhgroup": "ffdhe6144" 00:21:07.228 } 00:21:07.228 } 00:21:07.228 ]' 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.228 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.228 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.228 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.228 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.228 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.228 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.793 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:07.793 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:08.357 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.357 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.357 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.357 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.613 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.613 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.613 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.613 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.613 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.889 13:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.889 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.821 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.821 { 00:21:09.821 "cntlid": 89, 00:21:09.821 "qid": 0, 00:21:09.821 "state": "enabled", 00:21:09.821 "thread": "nvmf_tgt_poll_group_000", 00:21:09.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.821 "listen_address": { 00:21:09.821 "trtype": "TCP", 00:21:09.821 "adrfam": "IPv4", 00:21:09.821 "traddr": "10.0.0.2", 00:21:09.821 "trsvcid": "4420" 00:21:09.821 }, 00:21:09.821 "peer_address": { 00:21:09.821 "trtype": "TCP", 00:21:09.821 "adrfam": "IPv4", 00:21:09.821 "traddr": "10.0.0.1", 00:21:09.821 "trsvcid": "41492" 00:21:09.821 }, 00:21:09.821 "auth": { 00:21:09.821 "state": "completed", 00:21:09.821 "digest": "sha384", 00:21:09.821 "dhgroup": "ffdhe8192" 00:21:09.821 } 00:21:09.821 } 00:21:09.821 ]' 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.821 13:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.821 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.079 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.079 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.079 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.079 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.079 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.337 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:10.337 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.271 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.529 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.530 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.463 00:21:12.463 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.463 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.463 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.720 { 00:21:12.720 "cntlid": 91, 00:21:12.720 "qid": 0, 00:21:12.720 "state": "enabled", 00:21:12.720 "thread": "nvmf_tgt_poll_group_000", 00:21:12.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.720 "listen_address": { 00:21:12.720 "trtype": "TCP", 00:21:12.720 "adrfam": "IPv4", 00:21:12.720 "traddr": "10.0.0.2", 00:21:12.720 "trsvcid": "4420" 00:21:12.720 }, 00:21:12.720 "peer_address": { 00:21:12.720 "trtype": "TCP", 00:21:12.720 "adrfam": "IPv4", 00:21:12.720 "traddr": "10.0.0.1", 00:21:12.720 "trsvcid": "41508" 00:21:12.720 }, 00:21:12.720 "auth": { 00:21:12.720 "state": "completed", 00:21:12.720 "digest": "sha384", 00:21:12.720 "dhgroup": "ffdhe8192" 00:21:12.720 } 00:21:12.720 } 00:21:12.720 ]' 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.720 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.978 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:12.978 13:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:13.909 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.166 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.167 13:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.098 00:21:15.098 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.098 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.098 13:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.356 { 00:21:15.356 "cntlid": 93, 00:21:15.356 "qid": 0, 00:21:15.356 "state": "enabled", 00:21:15.356 "thread": "nvmf_tgt_poll_group_000", 00:21:15.356 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.356 "listen_address": { 00:21:15.356 "trtype": "TCP", 00:21:15.356 "adrfam": "IPv4", 00:21:15.356 "traddr": "10.0.0.2", 00:21:15.356 "trsvcid": "4420" 00:21:15.356 }, 00:21:15.356 "peer_address": { 00:21:15.356 "trtype": "TCP", 00:21:15.356 "adrfam": "IPv4", 00:21:15.356 "traddr": "10.0.0.1", 00:21:15.356 "trsvcid": "41538" 00:21:15.356 }, 00:21:15.356 "auth": { 00:21:15.356 "state": "completed", 00:21:15.356 "digest": "sha384", 00:21:15.356 "dhgroup": "ffdhe8192" 00:21:15.356 } 00:21:15.356 } 00:21:15.356 ]' 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.356 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.613 13:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:15.613 13:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.548 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.806 13:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.739 00:21:17.739 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:17.739 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.739 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.998 { 00:21:17.998 "cntlid": 95, 00:21:17.998 "qid": 0, 00:21:17.998 "state": "enabled", 00:21:17.998 "thread": "nvmf_tgt_poll_group_000", 00:21:17.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.998 "listen_address": { 00:21:17.998 "trtype": "TCP", 00:21:17.998 "adrfam": "IPv4", 00:21:17.998 "traddr": "10.0.0.2", 00:21:17.998 "trsvcid": "4420" 00:21:17.998 }, 00:21:17.998 "peer_address": { 00:21:17.998 "trtype": "TCP", 00:21:17.998 "adrfam": "IPv4", 00:21:17.998 "traddr": "10.0.0.1", 00:21:17.998 "trsvcid": "36536" 00:21:17.998 }, 00:21:17.998 "auth": { 00:21:17.998 "state": "completed", 00:21:17.998 "digest": "sha384", 00:21:17.998 "dhgroup": "ffdhe8192" 00:21:17.998 } 00:21:17.998 } 00:21:17.998 ]' 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.998 13:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.998 13:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.257 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:18.257 13:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.191 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.449 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.015 00:21:20.015 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.015 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.015 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.273 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.273 13:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.274 { 00:21:20.274 "cntlid": 97, 00:21:20.274 "qid": 0, 00:21:20.274 "state": "enabled", 00:21:20.274 "thread": "nvmf_tgt_poll_group_000", 00:21:20.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.274 "listen_address": { 00:21:20.274 "trtype": "TCP", 00:21:20.274 "adrfam": "IPv4", 00:21:20.274 "traddr": "10.0.0.2", 00:21:20.274 "trsvcid": "4420" 00:21:20.274 }, 00:21:20.274 "peer_address": { 00:21:20.274 "trtype": "TCP", 00:21:20.274 "adrfam": "IPv4", 00:21:20.274 "traddr": "10.0.0.1", 00:21:20.274 "trsvcid": "36560" 00:21:20.274 }, 00:21:20.274 "auth": { 00:21:20.274 "state": "completed", 00:21:20.274 "digest": "sha512", 00:21:20.274 "dhgroup": "null" 00:21:20.274 } 00:21:20.274 } 00:21:20.274 ]' 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:20.274 13:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.274 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.274 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.274 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.531 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:20.531 13:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.466 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.723 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.982 00:21:21.982 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.982 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.982 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.239 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.240 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.240 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.240 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.240 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.240 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.240 { 00:21:22.240 "cntlid": 99, 
00:21:22.240 "qid": 0, 00:21:22.240 "state": "enabled", 00:21:22.240 "thread": "nvmf_tgt_poll_group_000", 00:21:22.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.240 "listen_address": { 00:21:22.240 "trtype": "TCP", 00:21:22.240 "adrfam": "IPv4", 00:21:22.240 "traddr": "10.0.0.2", 00:21:22.240 "trsvcid": "4420" 00:21:22.240 }, 00:21:22.240 "peer_address": { 00:21:22.240 "trtype": "TCP", 00:21:22.240 "adrfam": "IPv4", 00:21:22.240 "traddr": "10.0.0.1", 00:21:22.240 "trsvcid": "36588" 00:21:22.240 }, 00:21:22.240 "auth": { 00:21:22.240 "state": "completed", 00:21:22.240 "digest": "sha512", 00:21:22.240 "dhgroup": "null" 00:21:22.240 } 00:21:22.240 } 00:21:22.240 ]' 00:21:22.240 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.497 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.497 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.497 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:22.497 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.497 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.497 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.497 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.755 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret 
DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:22.755 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.689 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.947 13:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.513 00:21:24.513 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.513 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.513 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.771 { 00:21:24.771 "cntlid": 101, 00:21:24.771 "qid": 0, 00:21:24.771 "state": "enabled", 00:21:24.771 "thread": "nvmf_tgt_poll_group_000", 00:21:24.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.771 "listen_address": { 00:21:24.771 "trtype": "TCP", 00:21:24.771 "adrfam": "IPv4", 00:21:24.771 "traddr": "10.0.0.2", 00:21:24.771 "trsvcid": "4420" 00:21:24.771 }, 00:21:24.771 "peer_address": { 00:21:24.771 "trtype": "TCP", 00:21:24.771 "adrfam": "IPv4", 00:21:24.771 "traddr": "10.0.0.1", 00:21:24.771 "trsvcid": "36616" 00:21:24.771 }, 00:21:24.771 "auth": { 00:21:24.771 "state": "completed", 00:21:24.771 "digest": "sha512", 00:21:24.771 "dhgroup": "null" 00:21:24.771 } 00:21:24.771 } 
00:21:24.771 ]' 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.771 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.029 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:25.030 13:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.964 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.964 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.222 13:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.222 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.222 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.222 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.222 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.480 00:21:26.480 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.480 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.480 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.046 { 00:21:27.046 "cntlid": 103, 00:21:27.046 "qid": 0, 00:21:27.046 "state": "enabled", 00:21:27.046 "thread": "nvmf_tgt_poll_group_000", 00:21:27.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.046 "listen_address": { 00:21:27.046 "trtype": "TCP", 00:21:27.046 "adrfam": "IPv4", 00:21:27.046 "traddr": "10.0.0.2", 00:21:27.046 "trsvcid": "4420" 00:21:27.046 }, 00:21:27.046 "peer_address": { 00:21:27.046 "trtype": "TCP", 00:21:27.046 "adrfam": "IPv4", 00:21:27.046 "traddr": "10.0.0.1", 00:21:27.046 "trsvcid": "38904" 00:21:27.046 }, 00:21:27.046 "auth": { 00:21:27.046 "state": "completed", 00:21:27.046 "digest": "sha512", 00:21:27.046 "dhgroup": "null" 00:21:27.046 } 00:21:27.046 } 00:21:27.046 ]' 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.046 13:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.046 13:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.303 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:27.303 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.237 13:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.237 13:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.495 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.753 00:21:28.753 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.753 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.753 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.010 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.010 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.010 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.010 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.010 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.010 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.010 { 00:21:29.010 "cntlid": 105, 00:21:29.010 "qid": 0, 00:21:29.010 "state": "enabled", 00:21:29.010 "thread": "nvmf_tgt_poll_group_000", 00:21:29.010 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.010 "listen_address": { 00:21:29.010 "trtype": "TCP", 00:21:29.010 "adrfam": "IPv4", 00:21:29.010 "traddr": "10.0.0.2", 00:21:29.010 "trsvcid": "4420" 00:21:29.010 }, 00:21:29.010 "peer_address": { 00:21:29.010 "trtype": "TCP", 00:21:29.010 "adrfam": "IPv4", 00:21:29.010 "traddr": "10.0.0.1", 00:21:29.010 "trsvcid": "38938" 00:21:29.010 }, 00:21:29.010 "auth": { 00:21:29.010 "state": "completed", 00:21:29.010 "digest": "sha512", 00:21:29.010 "dhgroup": "ffdhe2048" 00:21:29.011 } 00:21:29.011 } 00:21:29.011 ]' 00:21:29.011 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.268 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.268 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.268 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.268 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.268 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.268 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.268 13:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.526 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret 
DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:29.526 13:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.458 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.715 13:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.715 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.972 00:21:31.229 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.229 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.229 13:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.487 { 00:21:31.487 "cntlid": 107, 00:21:31.487 "qid": 0, 00:21:31.487 "state": "enabled", 00:21:31.487 "thread": "nvmf_tgt_poll_group_000", 00:21:31.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.487 "listen_address": { 00:21:31.487 "trtype": "TCP", 00:21:31.487 "adrfam": "IPv4", 00:21:31.487 "traddr": "10.0.0.2", 00:21:31.487 "trsvcid": "4420" 00:21:31.487 }, 00:21:31.487 "peer_address": { 00:21:31.487 "trtype": "TCP", 00:21:31.487 "adrfam": "IPv4", 00:21:31.487 "traddr": "10.0.0.1", 00:21:31.487 "trsvcid": "38946" 00:21:31.487 }, 00:21:31.487 "auth": { 00:21:31.487 "state": 
"completed", 00:21:31.487 "digest": "sha512", 00:21:31.487 "dhgroup": "ffdhe2048" 00:21:31.487 } 00:21:31.487 } 00:21:31.487 ]' 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.487 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.744 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:31.744 13:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:32.677 13:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.677 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.677 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.677 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.677 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.677 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.677 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.677 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.934 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.499 00:21:33.499 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.499 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.499 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.499 
13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.499 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.499 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.499 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.756 { 00:21:33.756 "cntlid": 109, 00:21:33.756 "qid": 0, 00:21:33.756 "state": "enabled", 00:21:33.756 "thread": "nvmf_tgt_poll_group_000", 00:21:33.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.756 "listen_address": { 00:21:33.756 "trtype": "TCP", 00:21:33.756 "adrfam": "IPv4", 00:21:33.756 "traddr": "10.0.0.2", 00:21:33.756 "trsvcid": "4420" 00:21:33.756 }, 00:21:33.756 "peer_address": { 00:21:33.756 "trtype": "TCP", 00:21:33.756 "adrfam": "IPv4", 00:21:33.756 "traddr": "10.0.0.1", 00:21:33.756 "trsvcid": "38978" 00:21:33.756 }, 00:21:33.756 "auth": { 00:21:33.756 "state": "completed", 00:21:33.756 "digest": "sha512", 00:21:33.756 "dhgroup": "ffdhe2048" 00:21:33.756 } 00:21:33.756 } 00:21:33.756 ]' 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.756 13:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.756 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.013 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:34.013 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:34.945 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.945 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.945 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.945 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.945 
13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.945 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.945 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.945 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.203 13:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.203 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.459 00:21:35.459 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.459 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.459 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.716 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.716 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.716 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.716 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.716 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.716 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.716 { 00:21:35.716 "cntlid": 111, 
00:21:35.716 "qid": 0, 00:21:35.716 "state": "enabled", 00:21:35.716 "thread": "nvmf_tgt_poll_group_000", 00:21:35.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.716 "listen_address": { 00:21:35.716 "trtype": "TCP", 00:21:35.716 "adrfam": "IPv4", 00:21:35.716 "traddr": "10.0.0.2", 00:21:35.716 "trsvcid": "4420" 00:21:35.716 }, 00:21:35.716 "peer_address": { 00:21:35.716 "trtype": "TCP", 00:21:35.716 "adrfam": "IPv4", 00:21:35.716 "traddr": "10.0.0.1", 00:21:35.716 "trsvcid": "38998" 00:21:35.716 }, 00:21:35.716 "auth": { 00:21:35.716 "state": "completed", 00:21:35.716 "digest": "sha512", 00:21:35.716 "dhgroup": "ffdhe2048" 00:21:35.716 } 00:21:35.716 } 00:21:35.716 ]' 00:21:35.716 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.973 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.973 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.973 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.973 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.973 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.973 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.973 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.230 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:36.230 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.160 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.417 13:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.417 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.675 00:21:37.675 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.675 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.675 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.932 { 00:21:37.932 "cntlid": 113, 00:21:37.932 "qid": 0, 00:21:37.932 "state": "enabled", 00:21:37.932 "thread": "nvmf_tgt_poll_group_000", 00:21:37.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.932 "listen_address": { 00:21:37.932 "trtype": "TCP", 00:21:37.932 "adrfam": "IPv4", 00:21:37.932 "traddr": "10.0.0.2", 00:21:37.932 "trsvcid": "4420" 00:21:37.932 }, 00:21:37.932 "peer_address": { 00:21:37.932 "trtype": "TCP", 00:21:37.932 "adrfam": "IPv4", 00:21:37.932 "traddr": "10.0.0.1", 00:21:37.932 "trsvcid": "54004" 00:21:37.932 }, 00:21:37.932 "auth": { 00:21:37.932 "state": 
"completed", 00:21:37.932 "digest": "sha512", 00:21:37.932 "dhgroup": "ffdhe3072" 00:21:37.932 } 00:21:37.932 } 00:21:37.932 ]' 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.932 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.189 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.189 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.189 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.189 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.189 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.189 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.447 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:38.447 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret 
DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.381 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.639 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.897 00:21:39.897 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.897 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.897 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.155 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.155 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.155 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.155 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.155 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.155 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.155 { 00:21:40.155 "cntlid": 115, 00:21:40.155 "qid": 0, 00:21:40.155 "state": "enabled", 00:21:40.155 "thread": "nvmf_tgt_poll_group_000", 00:21:40.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.155 "listen_address": { 00:21:40.155 "trtype": "TCP", 00:21:40.155 "adrfam": "IPv4", 00:21:40.155 "traddr": "10.0.0.2", 00:21:40.155 "trsvcid": "4420" 00:21:40.155 }, 00:21:40.155 "peer_address": { 00:21:40.155 "trtype": "TCP", 00:21:40.155 "adrfam": "IPv4", 00:21:40.155 "traddr": "10.0.0.1", 00:21:40.155 "trsvcid": "54030" 00:21:40.155 }, 00:21:40.155 "auth": { 00:21:40.155 "state": "completed", 00:21:40.155 "digest": "sha512", 00:21:40.155 "dhgroup": "ffdhe3072" 00:21:40.155 } 00:21:40.155 } 00:21:40.155 ]' 00:21:40.155 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.413 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.413 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.413 13:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.413 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.413 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.413 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.413 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.671 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:40.671 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.605 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.864 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.122 00:21:42.122 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.122 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.122 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.380 13:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.380 { 00:21:42.380 "cntlid": 117, 00:21:42.380 "qid": 0, 00:21:42.380 "state": "enabled", 00:21:42.380 "thread": "nvmf_tgt_poll_group_000", 00:21:42.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.380 "listen_address": { 00:21:42.380 "trtype": "TCP", 00:21:42.380 "adrfam": "IPv4", 00:21:42.380 "traddr": "10.0.0.2", 00:21:42.380 "trsvcid": "4420" 00:21:42.380 }, 00:21:42.380 "peer_address": { 00:21:42.380 "trtype": "TCP", 00:21:42.380 "adrfam": "IPv4", 00:21:42.380 "traddr": "10.0.0.1", 00:21:42.380 "trsvcid": "54048" 00:21:42.380 }, 00:21:42.380 "auth": { 00:21:42.380 "state": "completed", 00:21:42.380 "digest": "sha512", 00:21:42.380 "dhgroup": "ffdhe3072" 00:21:42.380 } 00:21:42.380 } 00:21:42.380 ]' 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.380 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.637 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.637 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.637 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.896 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:42.896 13:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.828 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.087 13:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.345 00:21:44.345 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.345 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.345 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.603 { 00:21:44.603 "cntlid": 119, 00:21:44.603 "qid": 0, 00:21:44.603 "state": "enabled", 00:21:44.603 "thread": "nvmf_tgt_poll_group_000", 00:21:44.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.603 "listen_address": { 00:21:44.603 "trtype": "TCP", 00:21:44.603 "adrfam": "IPv4", 00:21:44.603 "traddr": "10.0.0.2", 00:21:44.603 "trsvcid": "4420" 00:21:44.603 }, 00:21:44.603 "peer_address": { 00:21:44.603 "trtype": "TCP", 00:21:44.603 "adrfam": "IPv4", 00:21:44.603 "traddr": "10.0.0.1", 
00:21:44.603 "trsvcid": "54076" 00:21:44.603 }, 00:21:44.603 "auth": { 00:21:44.603 "state": "completed", 00:21:44.603 "digest": "sha512", 00:21:44.603 "dhgroup": "ffdhe3072" 00:21:44.603 } 00:21:44.603 } 00:21:44.603 ]' 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.603 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.861 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.861 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.861 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.861 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.861 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.119 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:45.119 13:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.053 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:46.310 13:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.310 13:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.567 00:21:46.568 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.568 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.568 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.825 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.825 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.825 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.825 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.825 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.825 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.825 { 00:21:46.825 "cntlid": 121, 00:21:46.825 "qid": 0, 00:21:46.825 "state": "enabled", 00:21:46.825 "thread": "nvmf_tgt_poll_group_000", 00:21:46.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.825 "listen_address": { 00:21:46.825 "trtype": "TCP", 00:21:46.825 "adrfam": "IPv4", 00:21:46.825 "traddr": "10.0.0.2", 00:21:46.825 "trsvcid": "4420" 00:21:46.825 }, 00:21:46.825 "peer_address": { 00:21:46.825 "trtype": "TCP", 00:21:46.825 "adrfam": "IPv4", 00:21:46.825 "traddr": "10.0.0.1", 00:21:46.825 "trsvcid": "51232" 00:21:46.825 }, 00:21:46.825 "auth": { 00:21:46.825 "state": "completed", 00:21:46.825 "digest": "sha512", 00:21:46.825 "dhgroup": "ffdhe4096" 00:21:46.825 } 00:21:46.825 } 00:21:46.825 ]' 00:21:46.825 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.083 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.083 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.083 13:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.083 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.083 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.083 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.083 13:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.341 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:47.341 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:48.271 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.271 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.271 13:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.271 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.271 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.271 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.271 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.271 13:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.528 13:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.528 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.529 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.786 00:21:48.786 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.786 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.786 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.044 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.044 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.044 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.044 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.301 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.301 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.301 { 00:21:49.301 "cntlid": 123, 00:21:49.301 "qid": 0, 00:21:49.301 "state": "enabled", 00:21:49.301 "thread": "nvmf_tgt_poll_group_000", 00:21:49.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.301 "listen_address": { 00:21:49.301 "trtype": "TCP", 00:21:49.301 "adrfam": "IPv4", 00:21:49.301 "traddr": "10.0.0.2", 00:21:49.301 "trsvcid": "4420" 00:21:49.301 }, 00:21:49.301 "peer_address": { 00:21:49.301 "trtype": "TCP", 00:21:49.301 "adrfam": "IPv4", 00:21:49.301 "traddr": "10.0.0.1", 00:21:49.301 "trsvcid": "51240" 00:21:49.301 }, 00:21:49.301 "auth": { 00:21:49.301 "state": "completed", 00:21:49.301 "digest": "sha512", 00:21:49.301 "dhgroup": "ffdhe4096" 00:21:49.301 } 00:21:49.301 } 00:21:49.301 ]' 00:21:49.301 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.301 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.301 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.301 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.301 13:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.301 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.301 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.301 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.560 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:49.560 13:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:50.491 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.491 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.491 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.491 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.491 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.491 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.491 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.491 13:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.749 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:50.749 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.749 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.749 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:50.749 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:50.749 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.749 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.750 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.750 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.750 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.750 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.750 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.750 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.317 00:21:51.317 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.317 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.317 13:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.317 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.317 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.317 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.317 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.575 { 00:21:51.575 "cntlid": 125, 00:21:51.575 "qid": 0, 00:21:51.575 "state": "enabled", 00:21:51.575 "thread": "nvmf_tgt_poll_group_000", 00:21:51.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.575 "listen_address": { 00:21:51.575 "trtype": "TCP", 00:21:51.575 "adrfam": "IPv4", 00:21:51.575 "traddr": "10.0.0.2", 00:21:51.575 
"trsvcid": "4420" 00:21:51.575 }, 00:21:51.575 "peer_address": { 00:21:51.575 "trtype": "TCP", 00:21:51.575 "adrfam": "IPv4", 00:21:51.575 "traddr": "10.0.0.1", 00:21:51.575 "trsvcid": "51258" 00:21:51.575 }, 00:21:51.575 "auth": { 00:21:51.575 "state": "completed", 00:21:51.575 "digest": "sha512", 00:21:51.575 "dhgroup": "ffdhe4096" 00:21:51.575 } 00:21:51.575 } 00:21:51.575 ]' 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.575 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.832 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:51.832 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.769 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.027 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.285 00:21:53.542 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.542 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.542 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.799 { 00:21:53.799 "cntlid": 127, 00:21:53.799 "qid": 0, 00:21:53.799 "state": "enabled", 00:21:53.799 "thread": "nvmf_tgt_poll_group_000", 00:21:53.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.799 "listen_address": { 00:21:53.799 "trtype": "TCP", 00:21:53.799 "adrfam": "IPv4", 00:21:53.799 "traddr": "10.0.0.2", 00:21:53.799 "trsvcid": "4420" 00:21:53.799 }, 00:21:53.799 "peer_address": { 00:21:53.799 "trtype": "TCP", 00:21:53.799 "adrfam": "IPv4", 00:21:53.799 "traddr": "10.0.0.1", 00:21:53.799 "trsvcid": "51292" 00:21:53.799 }, 00:21:53.799 "auth": { 00:21:53.799 "state": "completed", 00:21:53.799 "digest": "sha512", 00:21:53.799 "dhgroup": "ffdhe4096" 00:21:53.799 } 00:21:53.799 } 00:21:53.799 ]' 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.799 13:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.799 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.800 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.057 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:54.057 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.989 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.246 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.811 00:21:55.811 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.811 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.811 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.070 13:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.070 { 00:21:56.070 "cntlid": 129, 00:21:56.070 "qid": 0, 00:21:56.070 "state": "enabled", 00:21:56.070 "thread": "nvmf_tgt_poll_group_000", 00:21:56.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.070 "listen_address": { 00:21:56.070 "trtype": "TCP", 00:21:56.070 "adrfam": "IPv4", 00:21:56.070 "traddr": "10.0.0.2", 00:21:56.070 "trsvcid": "4420" 00:21:56.070 }, 00:21:56.070 "peer_address": { 00:21:56.070 "trtype": "TCP", 00:21:56.070 "adrfam": "IPv4", 00:21:56.070 "traddr": "10.0.0.1", 00:21:56.070 "trsvcid": "51316" 00:21:56.070 }, 00:21:56.070 "auth": { 00:21:56.070 "state": "completed", 00:21:56.070 "digest": "sha512", 00:21:56.070 "dhgroup": "ffdhe6144" 00:21:56.070 } 00:21:56.070 } 00:21:56.070 ]' 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.070 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.327 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:56.327 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:21:57.258 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.258 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.258 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.258 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.258 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.258 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.258 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.258 13:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.515 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:57.515 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.515 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.516 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.082 00:21:58.082 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.082 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.082 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.342 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.342 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.342 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.342 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.342 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.342 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.342 { 00:21:58.342 "cntlid": 131, 00:21:58.342 "qid": 0, 00:21:58.342 "state": "enabled", 00:21:58.342 "thread": "nvmf_tgt_poll_group_000", 00:21:58.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.342 "listen_address": { 00:21:58.342 "trtype": "TCP", 00:21:58.342 "adrfam": "IPv4", 00:21:58.342 "traddr": "10.0.0.2", 00:21:58.342 
"trsvcid": "4420" 00:21:58.342 }, 00:21:58.342 "peer_address": { 00:21:58.342 "trtype": "TCP", 00:21:58.342 "adrfam": "IPv4", 00:21:58.342 "traddr": "10.0.0.1", 00:21:58.342 "trsvcid": "51930" 00:21:58.342 }, 00:21:58.342 "auth": { 00:21:58.342 "state": "completed", 00:21:58.342 "digest": "sha512", 00:21:58.342 "dhgroup": "ffdhe6144" 00:21:58.342 } 00:21:58.342 } 00:21:58.342 ]' 00:21:58.342 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.600 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.600 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.600 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:58.600 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.600 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.600 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.600 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.857 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:58.857 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.788 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.045 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.046 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.610 00:22:00.610 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.610 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.610 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.868 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.868 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.868 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.869 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.869 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.869 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.869 { 00:22:00.869 "cntlid": 133, 00:22:00.869 "qid": 0, 00:22:00.869 "state": "enabled", 00:22:00.869 "thread": "nvmf_tgt_poll_group_000", 00:22:00.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.869 "listen_address": { 00:22:00.869 "trtype": "TCP", 00:22:00.869 "adrfam": "IPv4", 00:22:00.869 "traddr": "10.0.0.2", 00:22:00.869 "trsvcid": "4420" 00:22:00.869 }, 00:22:00.869 "peer_address": { 00:22:00.869 "trtype": "TCP", 00:22:00.869 "adrfam": "IPv4", 00:22:00.869 "traddr": "10.0.0.1", 00:22:00.869 "trsvcid": "51946" 00:22:00.869 }, 00:22:00.869 "auth": { 00:22:00.869 "state": "completed", 00:22:00.869 "digest": "sha512", 00:22:00.869 "dhgroup": "ffdhe6144" 00:22:00.869 } 00:22:00.869 } 00:22:00.869 ]' 00:22:00.869 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.869 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.869 13:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.869 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.869 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.126 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.126 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.126 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.382 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:22:01.382 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.315 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.572 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.140 00:22:03.140 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.140 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.140 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.396 { 00:22:03.396 "cntlid": 135, 00:22:03.396 "qid": 0, 00:22:03.396 "state": "enabled", 00:22:03.396 "thread": "nvmf_tgt_poll_group_000", 00:22:03.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.396 "listen_address": { 00:22:03.396 "trtype": "TCP", 00:22:03.396 "adrfam": "IPv4", 00:22:03.396 "traddr": "10.0.0.2", 00:22:03.396 "trsvcid": "4420" 00:22:03.396 }, 00:22:03.396 "peer_address": { 00:22:03.396 "trtype": "TCP", 00:22:03.396 "adrfam": "IPv4", 00:22:03.396 "traddr": "10.0.0.1", 00:22:03.396 "trsvcid": "51964" 00:22:03.396 }, 00:22:03.396 "auth": { 00:22:03.396 "state": "completed", 00:22:03.396 "digest": "sha512", 00:22:03.396 "dhgroup": "ffdhe6144" 00:22:03.396 } 00:22:03.396 } 00:22:03.396 ]' 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.396 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.397 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.397 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.397 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.397 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.653 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:03.653 13:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:04.583 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.583 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.584 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.584 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.584 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.584 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.584 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.584 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.584 13:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.841 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.842 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.842 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.842 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.842 13:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.772 00:22:05.772 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.772 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.772 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.027 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.027 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.027 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.027 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.027 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.027 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.027 { 00:22:06.027 "cntlid": 137, 00:22:06.027 "qid": 0, 00:22:06.027 "state": "enabled", 00:22:06.027 "thread": "nvmf_tgt_poll_group_000", 00:22:06.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.028 "listen_address": { 00:22:06.028 "trtype": "TCP", 00:22:06.028 "adrfam": "IPv4", 00:22:06.028 "traddr": "10.0.0.2", 00:22:06.028 
"trsvcid": "4420" 00:22:06.028 }, 00:22:06.028 "peer_address": { 00:22:06.028 "trtype": "TCP", 00:22:06.028 "adrfam": "IPv4", 00:22:06.028 "traddr": "10.0.0.1", 00:22:06.028 "trsvcid": "51992" 00:22:06.028 }, 00:22:06.028 "auth": { 00:22:06.028 "state": "completed", 00:22:06.028 "digest": "sha512", 00:22:06.028 "dhgroup": "ffdhe8192" 00:22:06.028 } 00:22:06.028 } 00:22:06.028 ]' 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.028 13:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.283 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:22:06.283 13:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:22:07.214 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.475 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.475 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.475 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.475 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.475 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.475 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.475 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.735 13:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.735 13:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.667 00:22:08.667 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.667 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.667 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.924 { 00:22:08.924 "cntlid": 139, 00:22:08.924 "qid": 0, 00:22:08.924 "state": "enabled", 00:22:08.924 "thread": "nvmf_tgt_poll_group_000", 00:22:08.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.924 "listen_address": { 00:22:08.924 "trtype": "TCP", 00:22:08.924 "adrfam": "IPv4", 00:22:08.924 "traddr": "10.0.0.2", 00:22:08.924 "trsvcid": "4420" 00:22:08.924 }, 00:22:08.924 "peer_address": { 00:22:08.924 "trtype": "TCP", 00:22:08.924 "adrfam": "IPv4", 00:22:08.924 "traddr": "10.0.0.1", 00:22:08.924 "trsvcid": "51670" 00:22:08.924 }, 00:22:08.924 "auth": { 00:22:08.924 "state": "completed", 00:22:08.924 "digest": "sha512", 00:22:08.924 "dhgroup": "ffdhe8192" 00:22:08.924 } 00:22:08.924 } 00:22:08.924 ]' 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.924 13:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.924 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.925 13:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.182 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:22:09.182 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: --dhchap-ctrl-secret DHHC-1:02:ZmIwMTY1NzMxOGMwODJkM2IxOWI4ZDE4YmJhNTM3MjRkNDg5NjE4NDU2OGMxZjBluPxwpw==: 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.114 13:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.373 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.347 00:22:11.347 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.347 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.347 13:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.608 13:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.608 { 00:22:11.608 "cntlid": 141, 00:22:11.608 "qid": 0, 00:22:11.608 "state": "enabled", 00:22:11.608 "thread": "nvmf_tgt_poll_group_000", 00:22:11.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.608 "listen_address": { 00:22:11.608 "trtype": "TCP", 00:22:11.608 "adrfam": "IPv4", 00:22:11.608 "traddr": "10.0.0.2", 00:22:11.608 "trsvcid": "4420" 00:22:11.608 }, 00:22:11.608 "peer_address": { 00:22:11.608 "trtype": "TCP", 00:22:11.608 "adrfam": "IPv4", 00:22:11.608 "traddr": "10.0.0.1", 00:22:11.608 "trsvcid": "51690" 00:22:11.608 }, 00:22:11.608 "auth": { 00:22:11.608 "state": "completed", 00:22:11.608 "digest": "sha512", 00:22:11.608 "dhgroup": "ffdhe8192" 00:22:11.608 } 00:22:11.608 } 00:22:11.608 ]' 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.608 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.866 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:22:11.866 13:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:01:NWQ4Y2EyZjBlY2RkY2Q3OGQ4OTI3NTRjNzEwZmMyZDYkqlhb: 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.800 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.058 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.993 00:22:13.993 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.993 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.993 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.250 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.250 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.250 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.250 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.250 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.250 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.250 { 00:22:14.250 "cntlid": 143, 00:22:14.250 "qid": 0, 00:22:14.250 "state": "enabled", 00:22:14.250 "thread": "nvmf_tgt_poll_group_000", 00:22:14.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.250 "listen_address": { 00:22:14.250 "trtype": "TCP", 00:22:14.250 "adrfam": 
"IPv4", 00:22:14.250 "traddr": "10.0.0.2", 00:22:14.250 "trsvcid": "4420" 00:22:14.250 }, 00:22:14.250 "peer_address": { 00:22:14.250 "trtype": "TCP", 00:22:14.250 "adrfam": "IPv4", 00:22:14.250 "traddr": "10.0.0.1", 00:22:14.250 "trsvcid": "51720" 00:22:14.250 }, 00:22:14.250 "auth": { 00:22:14.250 "state": "completed", 00:22:14.250 "digest": "sha512", 00:22:14.250 "dhgroup": "ffdhe8192" 00:22:14.250 } 00:22:14.250 } 00:22:14.250 ]' 00:22:14.250 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.250 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.250 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.250 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.250 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.508 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.508 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.508 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.765 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:14.765 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.756 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.076 13:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.076 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.709 00:22:16.709 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.709 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.709 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.992 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.992 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.992 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.992 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.992 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.992 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.992 { 00:22:16.992 "cntlid": 145, 00:22:16.992 "qid": 0, 00:22:16.993 "state": "enabled", 00:22:16.993 "thread": "nvmf_tgt_poll_group_000", 00:22:16.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.993 "listen_address": { 00:22:16.993 "trtype": "TCP", 00:22:16.993 "adrfam": "IPv4", 00:22:16.993 "traddr": "10.0.0.2", 00:22:16.993 "trsvcid": "4420" 00:22:16.993 }, 00:22:16.993 "peer_address": { 00:22:16.993 "trtype": "TCP", 00:22:16.993 "adrfam": "IPv4", 00:22:16.993 "traddr": "10.0.0.1", 00:22:16.993 "trsvcid": "40186" 00:22:16.993 }, 00:22:16.993 "auth": { 00:22:16.993 "state": 
"completed", 00:22:16.993 "digest": "sha512", 00:22:16.993 "dhgroup": "ffdhe8192" 00:22:16.993 } 00:22:16.993 } 00:22:16.993 ]' 00:22:16.993 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.251 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.251 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.251 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.251 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.251 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.251 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.251 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.509 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:22:17.509 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ODk4MGE3ZmY0ZTUzYzk0OTE5YWM0NTg5NWU4NzdmMWVhOWEyZWQ4ZmM0NmFiNzY0mHfK3A==: --dhchap-ctrl-secret 
DHHC-1:03:ODllOTYzNDM2ZDQzMjZiOWM4NDIyZmIxYWVkMjliYjY0MTM4ZGQwZThmMjU3NjEwNmU2MzE2YTVlZjMwZWFmNZo29Sk=: 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:18.457 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:19.392 request: 00:22:19.392 { 00:22:19.392 "name": "nvme0", 00:22:19.392 "trtype": "tcp", 00:22:19.392 "traddr": "10.0.0.2", 00:22:19.392 "adrfam": "ipv4", 00:22:19.392 "trsvcid": "4420", 00:22:19.392 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.392 "prchk_reftag": false, 00:22:19.392 "prchk_guard": false, 00:22:19.392 "hdgst": false, 00:22:19.392 "ddgst": false, 00:22:19.392 "dhchap_key": "key2", 00:22:19.392 "allow_unrecognized_csi": false, 00:22:19.392 "method": "bdev_nvme_attach_controller", 00:22:19.392 "req_id": 1 00:22:19.392 } 00:22:19.392 Got JSON-RPC error response 00:22:19.392 response: 00:22:19.392 { 00:22:19.392 "code": -5, 00:22:19.392 "message": 
"Input/output error" 00:22:19.392 } 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:19.392 13:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:19.392 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:19.957 request: 00:22:19.957 { 00:22:19.957 "name": "nvme0", 00:22:19.957 "trtype": "tcp", 00:22:19.957 "traddr": "10.0.0.2", 00:22:19.957 "adrfam": "ipv4", 00:22:19.957 "trsvcid": "4420", 00:22:19.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.957 "prchk_reftag": false, 00:22:19.957 "prchk_guard": false, 00:22:19.957 "hdgst": 
false, 00:22:19.957 "ddgst": false, 00:22:19.958 "dhchap_key": "key1", 00:22:19.958 "dhchap_ctrlr_key": "ckey2", 00:22:19.958 "allow_unrecognized_csi": false, 00:22:19.958 "method": "bdev_nvme_attach_controller", 00:22:19.958 "req_id": 1 00:22:19.958 } 00:22:19.958 Got JSON-RPC error response 00:22:19.958 response: 00:22:19.958 { 00:22:19.958 "code": -5, 00:22:19.958 "message": "Input/output error" 00:22:19.958 } 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.958 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.891 request: 00:22:20.891 { 00:22:20.891 "name": "nvme0", 00:22:20.891 "trtype": 
"tcp", 00:22:20.891 "traddr": "10.0.0.2", 00:22:20.891 "adrfam": "ipv4", 00:22:20.891 "trsvcid": "4420", 00:22:20.891 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.891 "prchk_reftag": false, 00:22:20.891 "prchk_guard": false, 00:22:20.891 "hdgst": false, 00:22:20.891 "ddgst": false, 00:22:20.891 "dhchap_key": "key1", 00:22:20.891 "dhchap_ctrlr_key": "ckey1", 00:22:20.891 "allow_unrecognized_csi": false, 00:22:20.891 "method": "bdev_nvme_attach_controller", 00:22:20.891 "req_id": 1 00:22:20.891 } 00:22:20.891 Got JSON-RPC error response 00:22:20.891 response: 00:22:20.891 { 00:22:20.891 "code": -5, 00:22:20.891 "message": "Input/output error" 00:22:20.891 } 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 240512 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 240512 ']' 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 240512 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 240512 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 240512' 00:22:20.891 killing process with pid 240512 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 240512 00:22:20.891 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 240512 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=263498 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 263498 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 263498 ']' 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.150 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 263498 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 263498 ']' 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.409 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.667 null0 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wcM 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.667 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1gM ]] 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gM 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.bFx 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.VHk ]] 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VHk 00:22:21.925 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ony 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XQt ]] 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XQt 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mbh 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.926 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.299 nvme0n1 00:22:23.299 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.299 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.299 13:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.557 { 00:22:23.557 "cntlid": 1, 00:22:23.557 "qid": 0, 00:22:23.557 "state": "enabled", 00:22:23.557 "thread": "nvmf_tgt_poll_group_000", 00:22:23.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:23.557 "listen_address": { 00:22:23.557 "trtype": "TCP", 00:22:23.557 "adrfam": "IPv4", 00:22:23.557 "traddr": "10.0.0.2", 00:22:23.557 "trsvcid": "4420" 00:22:23.557 }, 00:22:23.557 "peer_address": { 00:22:23.557 "trtype": "TCP", 00:22:23.557 "adrfam": "IPv4", 00:22:23.557 "traddr": 
"10.0.0.1", 00:22:23.557 "trsvcid": "40222" 00:22:23.557 }, 00:22:23.557 "auth": { 00:22:23.557 "state": "completed", 00:22:23.557 "digest": "sha512", 00:22:23.557 "dhgroup": "ffdhe8192" 00:22:23.557 } 00:22:23.557 } 00:22:23.557 ]' 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.557 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.815 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:23.815 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:24.748 13:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:24.748 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:25.006 13:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.006 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.263 request: 00:22:25.263 { 00:22:25.263 "name": "nvme0", 00:22:25.263 "trtype": "tcp", 00:22:25.263 "traddr": "10.0.0.2", 00:22:25.263 "adrfam": "ipv4", 00:22:25.263 "trsvcid": "4420", 00:22:25.263 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:25.264 "prchk_reftag": false, 00:22:25.264 "prchk_guard": false, 00:22:25.264 "hdgst": false, 00:22:25.264 "ddgst": false, 00:22:25.264 "dhchap_key": "key3", 00:22:25.264 
"allow_unrecognized_csi": false, 00:22:25.264 "method": "bdev_nvme_attach_controller", 00:22:25.264 "req_id": 1 00:22:25.264 } 00:22:25.264 Got JSON-RPC error response 00:22:25.264 response: 00:22:25.264 { 00:22:25.264 "code": -5, 00:22:25.264 "message": "Input/output error" 00:22:25.264 } 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:25.521 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:25.780 13:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.780 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.037 request: 00:22:26.037 { 00:22:26.037 "name": "nvme0", 00:22:26.037 "trtype": "tcp", 00:22:26.037 "traddr": "10.0.0.2", 00:22:26.037 "adrfam": "ipv4", 00:22:26.037 "trsvcid": "4420", 00:22:26.037 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.037 "prchk_reftag": false, 00:22:26.037 "prchk_guard": false, 00:22:26.038 "hdgst": false, 00:22:26.038 "ddgst": false, 00:22:26.038 "dhchap_key": "key3", 00:22:26.038 "allow_unrecognized_csi": false, 00:22:26.038 "method": "bdev_nvme_attach_controller", 00:22:26.038 "req_id": 1 00:22:26.038 } 00:22:26.038 Got JSON-RPC error response 00:22:26.038 response: 00:22:26.038 { 00:22:26.038 "code": -5, 00:22:26.038 "message": "Input/output error" 00:22:26.038 } 00:22:26.038 
13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.038 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.295 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.861 request: 00:22:26.861 { 00:22:26.861 "name": "nvme0", 00:22:26.861 "trtype": "tcp", 00:22:26.861 "traddr": "10.0.0.2", 00:22:26.861 "adrfam": "ipv4", 00:22:26.861 "trsvcid": "4420", 00:22:26.861 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.861 "prchk_reftag": false, 00:22:26.861 "prchk_guard": false, 00:22:26.861 "hdgst": false, 00:22:26.861 "ddgst": false, 00:22:26.861 "dhchap_key": "key0", 00:22:26.861 "dhchap_ctrlr_key": "key1", 00:22:26.861 "allow_unrecognized_csi": false, 00:22:26.861 "method": "bdev_nvme_attach_controller", 00:22:26.861 "req_id": 1 00:22:26.861 } 00:22:26.861 Got JSON-RPC error response 00:22:26.861 response: 00:22:26.861 { 00:22:26.861 "code": -5, 00:22:26.861 "message": "Input/output error" 00:22:26.861 } 00:22:26.861 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:26.861 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.861 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.861 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.861 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:26.861 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:26.861 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:27.119 nvme0n1 00:22:27.119 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:27.119 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.119 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:27.377 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.377 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.377 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.639 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:27.639 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.639 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:27.639 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.639 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:27.639 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:27.639 13:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:29.012 nvme0n1 00:22:29.012 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:29.012 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.012 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:29.270 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.270 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.270 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.270 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.270 
13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.270 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:29.270 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:29.270 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.528 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.528 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:29.528 13:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: --dhchap-ctrl-secret DHHC-1:03:NTczMjAyY2I0ZGYyZjJiYzc4ZTQ5NzQ0YmMxNGE2ZmVlM2Q0OGEzZDU3ODk4NjQyMzZjZTZhNzRkNTE2ODU2NCTDr+4=: 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.461 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:30.719 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:31.652 request: 00:22:31.652 { 00:22:31.652 "name": "nvme0", 00:22:31.652 "trtype": "tcp", 00:22:31.652 "traddr": "10.0.0.2", 00:22:31.652 "adrfam": "ipv4", 00:22:31.652 "trsvcid": "4420", 00:22:31.652 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:31.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:31.652 "prchk_reftag": false, 00:22:31.652 "prchk_guard": false, 00:22:31.652 "hdgst": false, 00:22:31.652 "ddgst": false, 00:22:31.652 "dhchap_key": "key1", 00:22:31.652 "allow_unrecognized_csi": false, 00:22:31.652 "method": "bdev_nvme_attach_controller", 00:22:31.652 "req_id": 1 00:22:31.652 } 00:22:31.652 Got JSON-RPC error response 00:22:31.652 response: 00:22:31.652 { 00:22:31.652 "code": -5, 00:22:31.652 "message": "Input/output error" 00:22:31.652 } 00:22:31.652 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:31.652 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.652 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.652 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.652 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.652 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.652 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:33.024 nvme0n1 00:22:33.024 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:33.024 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:33.024 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.282 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.282 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.282 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.539 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.539 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.539 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.539 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.539 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:33.539 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:33.539 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:33.796 nvme0n1 00:22:33.796 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:33.796 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:33.796 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.053 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.053 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.053 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: '' 2s 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: ]] 00:22:34.311 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NWYxZDI3MzYyNmNmMWM3ODM4MjJiNDQwYWJkZGUxMWQNTdt9: 00:22:34.568 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:34.568 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:34.568 13:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:36.467 
13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: 2s 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:36.467 13:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: ]] 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTAyYWU2OTJiOTFjYmNmN2U3YjRjMjhlZjYxYzZkNmVhMmQ1NDAxZmQ5MmRlZmVjE0hqJQ==: 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:36.467 13:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:38.366 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:38.366 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:38.366 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:38.366 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:38.366 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:38.366 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.624 13:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.997 nvme0n1 00:22:39.997 13:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.997 13:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.997 13:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.997 13:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.997 13:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.997 13:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.931 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:40.931 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:40.931 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.190 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.190 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.190 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.190 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.190 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.190 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:41.190 13:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:41.448 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:41.448 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:41.448 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.706 13:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:42.640 request: 00:22:42.640 { 00:22:42.640 "name": "nvme0", 00:22:42.640 "dhchap_key": "key1", 00:22:42.640 "dhchap_ctrlr_key": "key3", 00:22:42.640 "method": "bdev_nvme_set_keys", 00:22:42.640 "req_id": 1 00:22:42.640 } 00:22:42.640 Got JSON-RPC error response 00:22:42.640 response: 00:22:42.640 { 00:22:42.640 "code": -13, 00:22:42.640 "message": "Permission denied" 00:22:42.640 } 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:42.640 13:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:42.640 13:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:44.012 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:45.385 nvme0n1 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.386 13:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:45.386 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:46.317 request: 00:22:46.317 { 00:22:46.317 "name": "nvme0", 00:22:46.317 "dhchap_key": "key2", 00:22:46.317 "dhchap_ctrlr_key": "key0", 00:22:46.317 "method": "bdev_nvme_set_keys", 00:22:46.317 "req_id": 1 00:22:46.317 } 00:22:46.317 Got JSON-RPC error response 00:22:46.317 response: 00:22:46.317 { 00:22:46.317 "code": -13, 00:22:46.317 "message": "Permission denied" 00:22:46.317 } 00:22:46.317 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:46.317 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.317 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:46.317 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.317 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:46.317 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:46.317 13:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.574 13:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:46.574 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:47.520 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:47.520 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:47.520 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 240532 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 240532 ']' 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 240532 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 240532 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 240532' 00:22:47.779 killing process with pid 240532 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 240532 00:22:47.779 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 240532 00:22:48.344 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:48.344 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:48.344 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:48.344 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.344 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:48.344 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.344 13:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.344 rmmod nvme_tcp 00:22:48.344 rmmod nvme_fabrics 00:22:48.344 rmmod nvme_keyring 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 263498 ']' 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 263498 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 263498 ']' 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 263498 00:22:48.344 13:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 263498 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 263498' 00:22:48.344 killing process with pid 263498 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 263498 00:22:48.344 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 263498 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.604 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wcM /tmp/spdk.key-sha256.bFx /tmp/spdk.key-sha384.ony /tmp/spdk.key-sha512.mbh /tmp/spdk.key-sha512.1gM /tmp/spdk.key-sha384.VHk /tmp/spdk.key-sha256.XQt '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:50.511 00:22:50.511 real 3m33.749s 00:22:50.511 user 8m18.976s 00:22:50.511 sys 0m29.080s 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.511 ************************************ 00:22:50.511 END TEST nvmf_auth_target 00:22:50.511 ************************************ 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.511 13:33:42 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:50.511 ************************************ 00:22:50.511 START TEST nvmf_bdevio_no_huge 00:22:50.511 ************************************ 00:22:50.511 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:50.770 * Looking for test storage... 00:22:50.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.770 13:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:50.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.770 --rc genhtml_branch_coverage=1 00:22:50.770 --rc genhtml_function_coverage=1 00:22:50.770 --rc genhtml_legend=1 00:22:50.770 --rc geninfo_all_blocks=1 00:22:50.770 --rc geninfo_unexecuted_blocks=1 00:22:50.770 00:22:50.770 ' 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:50.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.770 --rc genhtml_branch_coverage=1 00:22:50.770 --rc genhtml_function_coverage=1 00:22:50.770 --rc genhtml_legend=1 00:22:50.770 --rc geninfo_all_blocks=1 00:22:50.770 --rc geninfo_unexecuted_blocks=1 00:22:50.770 00:22:50.770 ' 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:50.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.770 --rc genhtml_branch_coverage=1 00:22:50.770 --rc genhtml_function_coverage=1 00:22:50.770 --rc genhtml_legend=1 00:22:50.770 --rc geninfo_all_blocks=1 00:22:50.770 --rc geninfo_unexecuted_blocks=1 00:22:50.770 00:22:50.770 ' 00:22:50.770 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:50.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.771 --rc genhtml_branch_coverage=1 00:22:50.771 --rc 
genhtml_function_coverage=1 00:22:50.771 --rc genhtml_legend=1 00:22:50.771 --rc geninfo_all_blocks=1 00:22:50.771 --rc geninfo_unexecuted_blocks=1 00:22:50.771 00:22:50.771 ' 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.771 13:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.771 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.306 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:22:53.307 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:53.307 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:53.307 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.307 
13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:53.307 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:53.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:22:53.307 00:22:53.307 --- 10.0.0.2 ping statistics --- 00:22:53.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.307 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:22:53.307 00:22:53.307 --- 10.0.0.1 ping statistics --- 00:22:53.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.307 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.307 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=268769 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 268769 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 268769 ']' 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.308 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 [2024-10-14 13:33:44.841440] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:22:53.308 [2024-10-14 13:33:44.841538] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:53.308 [2024-10-14 13:33:44.910304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.308 [2024-10-14 13:33:44.954568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.308 [2024-10-14 13:33:44.954635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.308 [2024-10-14 13:33:44.954649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.308 [2024-10-14 13:33:44.954660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.308 [2024-10-14 13:33:44.954669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:53.308 [2024-10-14 13:33:44.955713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:53.308 [2024-10-14 13:33:44.955777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:53.308 [2024-10-14 13:33:44.955826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:53.308 [2024-10-14 13:33:44.955828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 [2024-10-14 13:33:45.098997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.308 13:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 Malloc0 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:53.308 [2024-10-14 13:33:45.139328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.308 13:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:53.308 { 00:22:53.308 "params": { 00:22:53.308 "name": "Nvme$subsystem", 00:22:53.308 "trtype": "$TEST_TRANSPORT", 00:22:53.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.308 "adrfam": "ipv4", 00:22:53.308 "trsvcid": "$NVMF_PORT", 00:22:53.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.308 "hdgst": ${hdgst:-false}, 00:22:53.308 "ddgst": ${ddgst:-false} 00:22:53.308 }, 00:22:53.308 "method": "bdev_nvme_attach_controller" 00:22:53.308 } 00:22:53.308 EOF 00:22:53.308 )") 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:22:53.308 13:33:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:53.308 "params": { 00:22:53.308 "name": "Nvme1", 00:22:53.308 "trtype": "tcp", 00:22:53.308 "traddr": "10.0.0.2", 00:22:53.308 "adrfam": "ipv4", 00:22:53.308 "trsvcid": "4420", 00:22:53.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.308 "hdgst": false, 00:22:53.308 "ddgst": false 00:22:53.308 }, 00:22:53.308 "method": "bdev_nvme_attach_controller" 00:22:53.308 }' 00:22:53.566 [2024-10-14 13:33:45.184977] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:22:53.566 [2024-10-14 13:33:45.185051] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid268793 ] 00:22:53.566 [2024-10-14 13:33:45.246102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:53.566 [2024-10-14 13:33:45.295936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.566 [2024-10-14 13:33:45.295989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.566 [2024-10-14 13:33:45.295992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.824 I/O targets: 00:22:53.824 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:53.824 00:22:53.824 00:22:53.824 CUnit - A unit testing framework for C - Version 2.1-3 00:22:53.824 http://cunit.sourceforge.net/ 00:22:53.824 00:22:53.824 00:22:53.824 Suite: bdevio tests on: Nvme1n1 00:22:53.824 Test: blockdev write read block ...passed 00:22:53.824 Test: blockdev write zeroes read block ...passed 00:22:53.824 Test: blockdev write zeroes read no split ...passed 00:22:53.824 Test: blockdev write zeroes 
read split ...passed 00:22:53.824 Test: blockdev write zeroes read split partial ...passed 00:22:53.824 Test: blockdev reset ...[2024-10-14 13:33:45.603318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:53.824 [2024-10-14 13:33:45.603438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1570 (9): Bad file descriptor 00:22:54.082 [2024-10-14 13:33:45.751659] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:54.082 passed 00:22:54.082 Test: blockdev write read 8 blocks ...passed 00:22:54.082 Test: blockdev write read size > 128k ...passed 00:22:54.082 Test: blockdev write read invalid size ...passed 00:22:54.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:54.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:54.082 Test: blockdev write read max offset ...passed 00:22:54.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:54.082 Test: blockdev writev readv 8 blocks ...passed 00:22:54.082 Test: blockdev writev readv 30 x 1block ...passed 00:22:54.082 Test: blockdev writev readv block ...passed 00:22:54.082 Test: blockdev writev readv size > 128k ...passed 00:22:54.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:54.082 Test: blockdev comparev and writev ...[2024-10-14 13:33:45.924463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.082 [2024-10-14 13:33:45.924499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:54.082 [2024-10-14 13:33:45.924525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.082 [2024-10-14 13:33:45.924543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:54.082 [2024-10-14 13:33:45.924837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.082 [2024-10-14 13:33:45.924862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:54.082 [2024-10-14 13:33:45.924884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.082 [2024-10-14 13:33:45.924902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:54.082 [2024-10-14 13:33:45.925216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.083 [2024-10-14 13:33:45.925241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:54.083 [2024-10-14 13:33:45.925263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.083 [2024-10-14 13:33:45.925279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:54.083 [2024-10-14 13:33:45.925570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:54.083 [2024-10-14 13:33:45.925594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:54.083 [2024-10-14 13:33:45.925616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:22:54.083 [2024-10-14 13:33:45.925632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:54.341 passed 00:22:54.341 Test: blockdev nvme passthru rw ...passed 00:22:54.341 Test: blockdev nvme passthru vendor specific ...[2024-10-14 13:33:46.008342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.341 [2024-10-14 13:33:46.008369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:54.341 [2024-10-14 13:33:46.008507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.341 [2024-10-14 13:33:46.008531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:54.341 [2024-10-14 13:33:46.008668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.341 [2024-10-14 13:33:46.008691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:54.341 [2024-10-14 13:33:46.008826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.341 [2024-10-14 13:33:46.008850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:54.341 passed 00:22:54.341 Test: blockdev nvme admin passthru ...passed 00:22:54.341 Test: blockdev copy ...passed 00:22:54.341 00:22:54.341 Run Summary: Type Total Ran Passed Failed Inactive 00:22:54.341 suites 1 1 n/a 0 0 00:22:54.341 tests 23 23 23 0 0 00:22:54.341 asserts 152 152 152 0 n/a 00:22:54.341 00:22:54.341 Elapsed time = 1.167 seconds 00:22:54.599 13:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.599 rmmod nvme_tcp 00:22:54.599 rmmod nvme_fabrics 00:22:54.599 rmmod nvme_keyring 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 268769 ']' 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 268769 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 268769 ']' 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 268769 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.599 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268769 00:22:54.857 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:54.857 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:54.857 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268769' 00:22:54.857 killing process with pid 268769 00:22:54.857 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 268769 00:22:54.857 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 268769 00:22:55.115 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:55.115 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:55.115 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:55.115 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:55.115 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:22:55.115 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # iptables-restore 00:22:55.115 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:55.116 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.116 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.116 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.116 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.116 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.021 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.021 00:22:57.021 real 0m6.495s 00:22:57.021 user 0m10.115s 00:22:57.021 sys 0m2.521s 00:22:57.021 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:57.021 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:57.021 ************************************ 00:22:57.021 END TEST nvmf_bdevio_no_huge 00:22:57.021 ************************************ 00:22:57.021 13:33:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:57.021 13:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:57.021 13:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.021 13:33:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:57.280 ************************************ 00:22:57.280 START TEST nvmf_tls 
00:22:57.280 ************************************ 00:22:57.280 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:57.280 * Looking for test storage... 00:22:57.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:57.280 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:57.280 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:57.280 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:57.280 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:57.280 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.280 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.280 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.280 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.280 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.281 13:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:22:57.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.281 --rc genhtml_branch_coverage=1 00:22:57.281 --rc genhtml_function_coverage=1 00:22:57.281 --rc genhtml_legend=1 00:22:57.281 --rc geninfo_all_blocks=1 00:22:57.281 --rc geninfo_unexecuted_blocks=1 00:22:57.281 00:22:57.281 ' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:57.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.281 --rc genhtml_branch_coverage=1 00:22:57.281 --rc genhtml_function_coverage=1 00:22:57.281 --rc genhtml_legend=1 00:22:57.281 --rc geninfo_all_blocks=1 00:22:57.281 --rc geninfo_unexecuted_blocks=1 00:22:57.281 00:22:57.281 ' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:57.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.281 --rc genhtml_branch_coverage=1 00:22:57.281 --rc genhtml_function_coverage=1 00:22:57.281 --rc genhtml_legend=1 00:22:57.281 --rc geninfo_all_blocks=1 00:22:57.281 --rc geninfo_unexecuted_blocks=1 00:22:57.281 00:22:57.281 ' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:57.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.281 --rc genhtml_branch_coverage=1 00:22:57.281 --rc genhtml_function_coverage=1 00:22:57.281 --rc genhtml_legend=1 00:22:57.281 --rc geninfo_all_blocks=1 00:22:57.281 --rc geninfo_unexecuted_blocks=1 00:22:57.281 00:22:57.281 ' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:57.281 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.819 13:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:59.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:59.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.819 13:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:59.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:59.819 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:59.819 13:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.819 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.820 
13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:22:59.820 00:22:59.820 --- 10.0.0.2 ping statistics --- 00:22:59.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.820 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:22:59.820 00:22:59.820 --- 10.0.0.1 ping statistics --- 00:22:59.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.820 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=270989 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 270989 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 270989 ']' 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.820 [2024-10-14 13:33:51.406915] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:22:59.820 [2024-10-14 13:33:51.407002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.820 [2024-10-14 13:33:51.474809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.820 [2024-10-14 13:33:51.521632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.820 [2024-10-14 13:33:51.521687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:59.820 [2024-10-14 13:33:51.521722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.820 [2024-10-14 13:33:51.521733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.820 [2024-10-14 13:33:51.521743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.820 [2024-10-14 13:33:51.522359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:59.820 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:00.078 true 00:23:00.078 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.078 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:00.335 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:00.335 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:00.335 
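The tls_version and enable_ktls checks above all follow one pattern: set an option via `rpc.py sock_impl_set_options`, read it back with `sock_impl_get_options`, extract one field from the JSON reply (the trace uses `jq -r`), and compare. A self-contained sketch of the read-back half; the JSON reply is a canned stand-in for a live SPDK target (an assumption so the parsing step runs anywhere), and `python3` stands in for `jq`:

```shell
# Sketch of the get-and-check pattern used for tls_version/enable_ktls.
# 'reply' is a canned stand-in for: rpc.py sock_impl_get_options -i ssl
reply='{"impl_name": "ssl", "tls_version": 13, "enable_ktls": false}'
tls_version=$(printf '%s' "$reply" \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["tls_version"])')
[ "$tls_version" = "13" ] && echo "tls_version ok"
```

In the trace the same comparison is spelled as a glob-style test (`[[ 13 != \1\3 ]]`), which fails the script only when the RPC did not take effect.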
13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:00.901 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.901 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:01.158 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:01.158 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:01.158 13:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:01.416 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:01.416 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:01.674 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:01.674 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:01.674 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:01.674 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:01.932 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:01.932 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:01.932 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:02.190 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.190 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:02.448 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:02.448 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:02.448 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:02.706 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.706 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:02.964 13:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:02.964 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.pSPuAl0Z8f 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.8w7AwmdtKz 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pSPuAl0Z8f 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.8w7AwmdtKz 00:23:03.222 13:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:03.480 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:03.737 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.pSPuAl0Z8f 00:23:03.737 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pSPuAl0Z8f 00:23:03.737 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.302 [2024-10-14 13:33:55.882609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.302 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:04.559 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.816 [2024-10-14 13:33:56.428148] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.816 [2024-10-14 13:33:56.428436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.816 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:05.074 malloc0 00:23:05.074 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:05.332 13:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pSPuAl0Z8f 00:23:05.589 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:05.847 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pSPuAl0Z8f 00:23:15.815 Initializing NVMe Controllers 00:23:15.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:15.815 Initialization complete. Launching workers. 
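The `NVMeTLSkey-1:01:...` strings built earlier by `format_interchange_psk` look like the TLS PSK interchange format: the configured key string with a 4-byte CRC32 appended, base64-encoded between a version prefix and a trailing colon. The helper's internals are not shown in the trace, so the following is an assumed reconstruction inferred from its inputs and outputs (the CRC variant and byte order are the assumption):

```shell
# Assumed reconstruction of format_interchange_psk:
#   NVMeTLSkey-1:01:base64(key || CRC32(key)):
configured_key=00112233445566778899aabbccddeeff
psk=$(python3 - "$configured_key" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC32 (assumed)
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF
)
echo "$psk"
```

The resulting string is what gets written to the `mktemp` files, `chmod 0600`-ed, registered with `keyring_file_add_key`, and handed to the initiator via `--psk-path`/`--psk key0`.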
00:23:15.815 ======================================================== 00:23:15.815 Latency(us) 00:23:15.815 Device Information : IOPS MiB/s Average min max 00:23:15.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8382.67 32.74 7637.08 1027.97 10554.46 00:23:15.815 ======================================================== 00:23:15.815 Total : 8382.67 32.74 7637.08 1027.97 10554.46 00:23:15.815 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pSPuAl0Z8f 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pSPuAl0Z8f 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=272912 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 272912 /var/tmp/bdevperf.sock 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 272912 ']' 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.815 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.073 [2024-10-14 13:34:07.683012] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:23:16.073 [2024-10-14 13:34:07.683091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272912 ] 00:23:16.073 [2024-10-14 13:34:07.742949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.073 [2024-10-14 13:34:07.790092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.073 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.073 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.073 13:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pSPuAl0Z8f 00:23:16.330 13:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.895 [2024-10-14 13:34:08.478817] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.895 TLSTESTn1 00:23:16.895 13:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:16.895 Running I/O for 10 seconds... 00:23:19.204 2748.00 IOPS, 10.73 MiB/s [2024-10-14T11:34:11.990Z] 2824.50 IOPS, 11.03 MiB/s [2024-10-14T11:34:12.924Z] 2867.33 IOPS, 11.20 MiB/s [2024-10-14T11:34:13.857Z] 2874.75 IOPS, 11.23 MiB/s [2024-10-14T11:34:14.790Z] 2867.40 IOPS, 11.20 MiB/s [2024-10-14T11:34:15.721Z] 2883.17 IOPS, 11.26 MiB/s [2024-10-14T11:34:17.093Z] 2890.71 IOPS, 11.29 MiB/s [2024-10-14T11:34:18.026Z] 2890.75 IOPS, 11.29 MiB/s [2024-10-14T11:34:18.962Z] 2886.33 IOPS, 11.27 MiB/s [2024-10-14T11:34:18.962Z] 2883.40 IOPS, 11.26 MiB/s 00:23:27.109 Latency(us) 00:23:27.109 [2024-10-14T11:34:18.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.109 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.109 Verification LBA range: start 0x0 length 0x2000 00:23:27.109 TLSTESTn1 : 10.05 2882.37 11.26 0.00 0.00 44285.93 10534.31 56700.78 00:23:27.109 [2024-10-14T11:34:18.962Z] =================================================================================================================== 00:23:27.109 [2024-10-14T11:34:18.962Z] Total : 2882.37 11.26 0.00 0.00 44285.93 10534.31 56700.78 00:23:27.109 { 00:23:27.109 "results": [ 00:23:27.109 { 00:23:27.109 "job": "TLSTESTn1", 00:23:27.109 "core_mask": "0x4", 00:23:27.109 "workload": "verify", 00:23:27.109 "status": "finished", 00:23:27.109 "verify_range": { 00:23:27.109 "start": 0, 00:23:27.109 "length": 8192 00:23:27.109 }, 00:23:27.109 "queue_depth": 128, 00:23:27.109 "io_size": 4096, 00:23:27.109 "runtime": 10.047995, 
00:23:27.109 "iops": 2882.366083979938, 00:23:27.109 "mibps": 11.259242515546633, 00:23:27.109 "io_failed": 0, 00:23:27.109 "io_timeout": 0, 00:23:27.109 "avg_latency_us": 44285.926815571875, 00:23:27.109 "min_latency_us": 10534.305185185185, 00:23:27.109 "max_latency_us": 56700.776296296295 00:23:27.109 } 00:23:27.109 ], 00:23:27.109 "core_count": 1 00:23:27.109 } 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 272912 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 272912 ']' 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 272912 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 272912 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 272912' 00:23:27.109 killing process with pid 272912 00:23:27.109 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 272912 00:23:27.109 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.109 00:23:27.109 Latency(us) 00:23:27.109 [2024-10-14T11:34:18.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.109 [2024-10-14T11:34:18.962Z] 
=================================================================================================================== 00:23:27.109 [2024-10-14T11:34:18.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.110 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 272912 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8w7AwmdtKz 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8w7AwmdtKz 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8w7AwmdtKz 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8w7AwmdtKz 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274226 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274226 /var/tmp/bdevperf.sock 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 274226 ']' 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.368 13:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.368 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.368 [2024-10-14 13:34:19.049242] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:23:27.368 [2024-10-14 13:34:19.049332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274226 ] 00:23:27.368 [2024-10-14 13:34:19.112639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.368 [2024-10-14 13:34:19.162725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.626 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:27.626 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:27.626 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8w7AwmdtKz 00:23:27.883 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:28.142 [2024-10-14 13:34:19.887488] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.142 [2024-10-14 13:34:19.897167] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:28.142 [2024-10-14 13:34:19.897613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd6b70 (107): Transport endpoint is not connected 00:23:28.142 [2024-10-14 13:34:19.898606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd6b70 (9): Bad file descriptor 00:23:28.142 [2024-10-14 
13:34:19.899605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:28.142 [2024-10-14 13:34:19.899624] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:28.142 [2024-10-14 13:34:19.899652] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:28.142 [2024-10-14 13:34:19.899671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:28.142 request: 00:23:28.142 { 00:23:28.142 "name": "TLSTEST", 00:23:28.142 "trtype": "tcp", 00:23:28.142 "traddr": "10.0.0.2", 00:23:28.142 "adrfam": "ipv4", 00:23:28.142 "trsvcid": "4420", 00:23:28.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.142 "prchk_reftag": false, 00:23:28.142 "prchk_guard": false, 00:23:28.142 "hdgst": false, 00:23:28.142 "ddgst": false, 00:23:28.142 "psk": "key0", 00:23:28.142 "allow_unrecognized_csi": false, 00:23:28.142 "method": "bdev_nvme_attach_controller", 00:23:28.142 "req_id": 1 00:23:28.142 } 00:23:28.142 Got JSON-RPC error response 00:23:28.142 response: 00:23:28.142 { 00:23:28.142 "code": -5, 00:23:28.142 "message": "Input/output error" 00:23:28.142 } 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274226 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 274226 ']' 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 274226 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 274226 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 274226' 00:23:28.142 killing process with pid 274226 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 274226 00:23:28.142 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.142 00:23:28.142 Latency(us) 00:23:28.142 [2024-10-14T11:34:19.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.142 [2024-10-14T11:34:19.995Z] =================================================================================================================== 00:23:28.142 [2024-10-14T11:34:19.995Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.142 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 274226 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pSPuAl0Z8f 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pSPuAl0Z8f 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pSPuAl0Z8f 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pSPuAl0Z8f 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274370 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.400 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.401 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274370 
/var/tmp/bdevperf.sock 00:23:28.401 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 274370 ']' 00:23:28.401 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.401 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.401 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.401 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.401 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.401 [2024-10-14 13:34:20.170369] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:23:28.401 [2024-10-14 13:34:20.170486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274370 ] 00:23:28.401 [2024-10-14 13:34:20.233813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.659 [2024-10-14 13:34:20.283267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.659 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.659 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:28.659 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pSPuAl0Z8f 00:23:28.917 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:29.175 [2024-10-14 13:34:20.916185] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.175 [2024-10-14 13:34:20.921713] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:29.175 [2024-10-14 13:34:20.921748] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:29.175 [2024-10-14 13:34:20.921804] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:29.175 [2024-10-14 13:34:20.922323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x737b70 (107): Transport endpoint is not connected 00:23:29.175 [2024-10-14 13:34:20.923312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x737b70 (9): Bad file descriptor 00:23:29.175 [2024-10-14 13:34:20.924311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:29.175 [2024-10-14 13:34:20.924333] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:29.175 [2024-10-14 13:34:20.924347] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:29.175 [2024-10-14 13:34:20.924365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:29.175 request: 00:23:29.175 { 00:23:29.175 "name": "TLSTEST", 00:23:29.175 "trtype": "tcp", 00:23:29.175 "traddr": "10.0.0.2", 00:23:29.175 "adrfam": "ipv4", 00:23:29.175 "trsvcid": "4420", 00:23:29.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.176 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.176 "prchk_reftag": false, 00:23:29.176 "prchk_guard": false, 00:23:29.176 "hdgst": false, 00:23:29.176 "ddgst": false, 00:23:29.176 "psk": "key0", 00:23:29.176 "allow_unrecognized_csi": false, 00:23:29.176 "method": "bdev_nvme_attach_controller", 00:23:29.176 "req_id": 1 00:23:29.176 } 00:23:29.176 Got JSON-RPC error response 00:23:29.176 response: 00:23:29.176 { 00:23:29.176 "code": -5, 00:23:29.176 "message": "Input/output error" 00:23:29.176 } 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274370 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 274370 ']' 00:23:29.176 13:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 274370 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 274370 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 274370' 00:23:29.176 killing process with pid 274370 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 274370 00:23:29.176 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.176 00:23:29.176 Latency(us) 00:23:29.176 [2024-10-14T11:34:21.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.176 [2024-10-14T11:34:21.029Z] =================================================================================================================== 00:23:29.176 [2024-10-14T11:34:21.029Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.176 13:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 274370 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.434 13:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pSPuAl0Z8f 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pSPuAl0Z8f 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pSPuAl0Z8f 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pSPuAl0Z8f 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274511 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274511 /var/tmp/bdevperf.sock 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 274511 ']' 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.434 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.434 [2024-10-14 13:34:21.223784] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:23:29.434 [2024-10-14 13:34:21.223862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274511 ] 00:23:29.434 [2024-10-14 13:34:21.282159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.692 [2024-10-14 13:34:21.327465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.692 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.692 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:29.692 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pSPuAl0Z8f 00:23:29.950 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.208 [2024-10-14 13:34:21.965298] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.208 [2024-10-14 13:34:21.972575] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:30.208 [2024-10-14 13:34:21.972605] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:30.208 [2024-10-14 13:34:21.972667] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:30.208 [2024-10-14 13:34:21.973641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215cb70 (107): Transport endpoint is not connected 00:23:30.208 [2024-10-14 13:34:21.974621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215cb70 (9): Bad file descriptor 00:23:30.208 [2024-10-14 13:34:21.975619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:30.208 [2024-10-14 13:34:21.975650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:30.208 [2024-10-14 13:34:21.975677] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:30.208 [2024-10-14 13:34:21.975695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:30.208 request: 00:23:30.208 { 00:23:30.208 "name": "TLSTEST", 00:23:30.208 "trtype": "tcp", 00:23:30.208 "traddr": "10.0.0.2", 00:23:30.208 "adrfam": "ipv4", 00:23:30.208 "trsvcid": "4420", 00:23:30.208 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:30.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.208 "prchk_reftag": false, 00:23:30.208 "prchk_guard": false, 00:23:30.208 "hdgst": false, 00:23:30.208 "ddgst": false, 00:23:30.208 "psk": "key0", 00:23:30.208 "allow_unrecognized_csi": false, 00:23:30.208 "method": "bdev_nvme_attach_controller", 00:23:30.208 "req_id": 1 00:23:30.208 } 00:23:30.208 Got JSON-RPC error response 00:23:30.208 response: 00:23:30.208 { 00:23:30.208 "code": -5, 00:23:30.208 "message": "Input/output error" 00:23:30.208 } 00:23:30.208 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274511 00:23:30.208 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 274511 ']' 00:23:30.208 13:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 274511 00:23:30.208 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:30.208 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.208 13:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 274511 00:23:30.208 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:30.208 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:30.208 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 274511' 00:23:30.208 killing process with pid 274511 00:23:30.208 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 274511 00:23:30.208 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.208 00:23:30.208 Latency(us) 00:23:30.208 [2024-10-14T11:34:22.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.208 [2024-10-14T11:34:22.061Z] =================================================================================================================== 00:23:30.208 [2024-10-14T11:34:22.062Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.209 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 274511 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:30.467 13:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274652 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274652 /var/tmp/bdevperf.sock 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 274652 ']' 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.467 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.467 [2024-10-14 13:34:22.244238] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:23:30.467 [2024-10-14 13:34:22.244316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274652 ] 00:23:30.467 [2024-10-14 13:34:22.302898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.725 [2024-10-14 13:34:22.351471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.725 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.725 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:30.725 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:30.982 [2024-10-14 13:34:22.726240] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:30.983 [2024-10-14 13:34:22.726284] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:30.983 request: 00:23:30.983 { 00:23:30.983 "name": "key0", 00:23:30.983 "path": "", 00:23:30.983 "method": "keyring_file_add_key", 00:23:30.983 "req_id": 1 00:23:30.983 } 00:23:30.983 Got JSON-RPC error response 00:23:30.983 response: 00:23:30.983 { 00:23:30.983 "code": -1, 00:23:30.983 "message": "Operation not permitted" 00:23:30.983 } 00:23:30.983 13:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.240 [2024-10-14 13:34:22.987033] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:31.240 [2024-10-14 13:34:22.987092] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:31.240 request: 00:23:31.240 { 00:23:31.240 "name": "TLSTEST", 00:23:31.240 "trtype": "tcp", 00:23:31.240 "traddr": "10.0.0.2", 00:23:31.240 "adrfam": "ipv4", 00:23:31.240 "trsvcid": "4420", 00:23:31.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.240 "prchk_reftag": false, 00:23:31.240 "prchk_guard": false, 00:23:31.240 "hdgst": false, 00:23:31.240 "ddgst": false, 00:23:31.240 "psk": "key0", 00:23:31.240 "allow_unrecognized_csi": false, 00:23:31.240 "method": "bdev_nvme_attach_controller", 00:23:31.240 "req_id": 1 00:23:31.240 } 00:23:31.240 Got JSON-RPC error response 00:23:31.240 response: 00:23:31.240 { 00:23:31.240 "code": -126, 00:23:31.240 "message": "Required key not available" 00:23:31.240 } 00:23:31.240 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274652 00:23:31.240 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 274652 ']' 00:23:31.240 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 274652 00:23:31.240 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:31.241 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.241 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 274652 00:23:31.241 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:31.241 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:31.241 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 274652' 00:23:31.241 killing process with pid 274652 00:23:31.241 
13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 274652 00:23:31.241 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.241 00:23:31.241 Latency(us) 00:23:31.241 [2024-10-14T11:34:23.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.241 [2024-10-14T11:34:23.094Z] =================================================================================================================== 00:23:31.241 [2024-10-14T11:34:23.094Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.241 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 274652 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 270989 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 270989 ']' 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 270989 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 270989 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 270989' 00:23:31.499 killing process with pid 270989 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 270989 00:23:31.499 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 270989 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.BLZfvXNnZ9 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:31.757 13:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.BLZfvXNnZ9 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=274805 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 274805 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 274805 ']' 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.757 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.757 [2024-10-14 13:34:23.579661] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:23:31.757 [2024-10-14 13:34:23.579748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.016 [2024-10-14 13:34:23.646239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.016 [2024-10-14 13:34:23.693533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.016 [2024-10-14 13:34:23.693592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.016 [2024-10-14 13:34:23.693620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.016 [2024-10-14 13:34:23.693631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.016 [2024-10-14 13:34:23.693641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.016 [2024-10-14 13:34:23.694198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.BLZfvXNnZ9 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BLZfvXNnZ9 00:23:32.016 13:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.272 [2024-10-14 13:34:24.068816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.272 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:32.529 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:32.786 [2024-10-14 13:34:24.610254] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.786 [2024-10-14 13:34:24.610491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:32.786 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.045 malloc0 00:23:33.045 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.303 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BLZfvXNnZ9 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BLZfvXNnZ9 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275088 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.869 13:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275088 /var/tmp/bdevperf.sock 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 275088 ']' 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.869 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.127 [2024-10-14 13:34:25.737717] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
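Collecting the `setup_nvmf_tgt` RPCs scattered through the trace above into one place, the target-side TLS setup reduces to the sequence below. Every command is taken verbatim from the log; this is an illustrative sketch only, since it requires a running `nvmf_tgt` and SPDK's `rpc.py` to execute.

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key_path=/tmp/tmp.BLZfvXNnZ9   # PSK in interchange format, mode 0600

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-enabled ("TLS support is considered experimental")
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Fails with "Operation not permitted" later in this log when the file is 0666
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The bdevperf side then repeats `keyring_file_add_key` against its own RPC socket (`-s /var/tmp/bdevperf.sock`) before attaching the controller with `--psk key0`.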
00:23:34.127 [2024-10-14 13:34:25.737791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275088 ] 00:23:34.127 [2024-10-14 13:34:25.797269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.127 [2024-10-14 13:34:25.845835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.127 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.127 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:34.127 13:34:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:23:34.385 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.642 [2024-10-14 13:34:26.477044] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.899 TLSTESTn1 00:23:34.899 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:34.899 Running I/O for 10 seconds... 
00:23:37.205 3170.00 IOPS, 12.38 MiB/s [2024-10-14T11:34:29.991Z] 3268.00 IOPS, 12.77 MiB/s [2024-10-14T11:34:30.925Z] 3234.67 IOPS, 12.64 MiB/s [2024-10-14T11:34:31.859Z] 3287.25 IOPS, 12.84 MiB/s [2024-10-14T11:34:32.793Z] 3320.20 IOPS, 12.97 MiB/s [2024-10-14T11:34:33.727Z] 3281.50 IOPS, 12.82 MiB/s [2024-10-14T11:34:35.103Z] 3299.00 IOPS, 12.89 MiB/s [2024-10-14T11:34:36.035Z] 3290.00 IOPS, 12.85 MiB/s [2024-10-14T11:34:36.968Z] 3288.11 IOPS, 12.84 MiB/s [2024-10-14T11:34:36.968Z] 3306.90 IOPS, 12.92 MiB/s 00:23:45.115 Latency(us) 00:23:45.115 [2024-10-14T11:34:36.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.115 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:45.115 Verification LBA range: start 0x0 length 0x2000 00:23:45.115 TLSTESTn1 : 10.02 3311.74 12.94 0.00 0.00 38582.67 8883.77 37088.52 00:23:45.115 [2024-10-14T11:34:36.968Z] =================================================================================================================== 00:23:45.115 [2024-10-14T11:34:36.968Z] Total : 3311.74 12.94 0.00 0.00 38582.67 8883.77 37088.52 00:23:45.115 { 00:23:45.115 "results": [ 00:23:45.115 { 00:23:45.115 "job": "TLSTESTn1", 00:23:45.115 "core_mask": "0x4", 00:23:45.115 "workload": "verify", 00:23:45.115 "status": "finished", 00:23:45.115 "verify_range": { 00:23:45.115 "start": 0, 00:23:45.115 "length": 8192 00:23:45.115 }, 00:23:45.115 "queue_depth": 128, 00:23:45.115 "io_size": 4096, 00:23:45.115 "runtime": 10.02405, 00:23:45.115 "iops": 3311.7352766596337, 00:23:45.115 "mibps": 12.936465924451694, 00:23:45.115 "io_failed": 0, 00:23:45.115 "io_timeout": 0, 00:23:45.115 "avg_latency_us": 38582.66673717728, 00:23:45.115 "min_latency_us": 8883.76888888889, 00:23:45.115 "max_latency_us": 37088.52148148148 00:23:45.115 } 00:23:45.115 ], 00:23:45.115 "core_count": 1 00:23:45.115 } 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 275088 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 275088 ']' 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 275088 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 275088 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 275088' 00:23:45.115 killing process with pid 275088 00:23:45.115 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 275088 00:23:45.115 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.115 00:23:45.116 Latency(us) 00:23:45.116 [2024-10-14T11:34:36.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.116 [2024-10-14T11:34:36.969Z] =================================================================================================================== 00:23:45.116 [2024-10-14T11:34:36.969Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.116 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 275088 00:23:45.116 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.BLZfvXNnZ9 00:23:45.116 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BLZfvXNnZ9 00:23:45.116 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:45.116 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BLZfvXNnZ9 00:23:45.116 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BLZfvXNnZ9 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BLZfvXNnZ9 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=276406 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.374 13:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 276406 /var/tmp/bdevperf.sock 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 276406 ']' 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.374 13:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.374 [2024-10-14 13:34:37.018721] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
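As a cross-check of the TLSTESTn1 results JSON above, the reported `mibps` figure is just `iops` times the 4096-byte I/O size, converted to mebibytes; the numbers below are copied from that JSON.

```python
iops = 3311.7352766596337   # "iops" from the results JSON above
io_size = 4096              # "io_size" from the results JSON above
mibps = iops * io_size / (1 << 20)
print(round(mibps, 2))      # ~12.94, matching the reported 12.9364...
```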
00:23:45.374 [2024-10-14 13:34:37.018798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276406 ] 00:23:45.374 [2024-10-14 13:34:37.076616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.374 [2024-10-14 13:34:37.119322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.632 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.632 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:45.632 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:23:45.889 [2024-10-14 13:34:37.489052] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BLZfvXNnZ9': 0100666 00:23:45.889 [2024-10-14 13:34:37.489096] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:45.889 request: 00:23:45.889 { 00:23:45.889 "name": "key0", 00:23:45.889 "path": "/tmp/tmp.BLZfvXNnZ9", 00:23:45.889 "method": "keyring_file_add_key", 00:23:45.889 "req_id": 1 00:23:45.889 } 00:23:45.889 Got JSON-RPC error response 00:23:45.889 response: 00:23:45.889 { 00:23:45.890 "code": -1, 00:23:45.890 "message": "Operation not permitted" 00:23:45.890 } 00:23:45.890 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:46.148 [2024-10-14 13:34:37.757878] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.148 [2024-10-14 13:34:37.757938] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:46.148 request: 00:23:46.148 { 00:23:46.148 "name": "TLSTEST", 00:23:46.148 "trtype": "tcp", 00:23:46.148 "traddr": "10.0.0.2", 00:23:46.148 "adrfam": "ipv4", 00:23:46.148 "trsvcid": "4420", 00:23:46.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.148 "prchk_reftag": false, 00:23:46.148 "prchk_guard": false, 00:23:46.148 "hdgst": false, 00:23:46.148 "ddgst": false, 00:23:46.148 "psk": "key0", 00:23:46.148 "allow_unrecognized_csi": false, 00:23:46.148 "method": "bdev_nvme_attach_controller", 00:23:46.148 "req_id": 1 00:23:46.148 } 00:23:46.148 Got JSON-RPC error response 00:23:46.148 response: 00:23:46.148 { 00:23:46.148 "code": -126, 00:23:46.148 "message": "Required key not available" 00:23:46.148 } 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 276406 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 276406 ']' 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 276406 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276406 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 276406' 00:23:46.148 killing process with pid 276406 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 276406 00:23:46.148 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.148 00:23:46.148 Latency(us) 00:23:46.148 [2024-10-14T11:34:38.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.148 [2024-10-14T11:34:38.001Z] =================================================================================================================== 00:23:46.148 [2024-10-14T11:34:38.001Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 276406 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 274805 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 274805 ']' 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 274805 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.148 13:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 274805 00:23:46.405 13:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 274805' 00:23:46.405 killing process with pid 274805 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 274805 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 274805 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=276554 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 276554 00:23:46.405 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 276554 ']' 00:23:46.406 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.406 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.406 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:46.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.406 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.406 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.406 [2024-10-14 13:34:38.259601] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:23:46.406 [2024-10-14 13:34:38.259708] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.663 [2024-10-14 13:34:38.324246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.663 [2024-10-14 13:34:38.369120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.663 [2024-10-14 13:34:38.369192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.663 [2024-10-14 13:34:38.369221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.663 [2024-10-14 13:34:38.369232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.663 [2024-10-14 13:34:38.369242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.663 [2024-10-14 13:34:38.369808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.663 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.663 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:46.663 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.BLZfvXNnZ9 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BLZfvXNnZ9 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.BLZfvXNnZ9 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BLZfvXNnZ9 00:23:46.664 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.921 [2024-10-14 13:34:38.760862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.179 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:47.437 13:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.437 [2024-10-14 13:34:39.290344] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.437 [2024-10-14 13:34:39.290573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.693 13:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.950 malloc0 00:23:47.950 13:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.208 13:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:23:48.466 [2024-10-14 13:34:40.092086] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BLZfvXNnZ9': 0100666 00:23:48.466 [2024-10-14 13:34:40.092155] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:48.466 request: 00:23:48.466 { 00:23:48.466 "name": "key0", 00:23:48.466 "path": "/tmp/tmp.BLZfvXNnZ9", 00:23:48.466 "method": "keyring_file_add_key", 00:23:48.466 "req_id": 1 
00:23:48.466 } 00:23:48.466 Got JSON-RPC error response 00:23:48.466 response: 00:23:48.466 { 00:23:48.466 "code": -1, 00:23:48.466 "message": "Operation not permitted" 00:23:48.466 } 00:23:48.466 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.724 [2024-10-14 13:34:40.372911] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:48.724 [2024-10-14 13:34:40.372982] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:48.724 request: 00:23:48.724 { 00:23:48.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.724 "host": "nqn.2016-06.io.spdk:host1", 00:23:48.724 "psk": "key0", 00:23:48.724 "method": "nvmf_subsystem_add_host", 00:23:48.724 "req_id": 1 00:23:48.724 } 00:23:48.724 Got JSON-RPC error response 00:23:48.724 response: 00:23:48.724 { 00:23:48.724 "code": -32603, 00:23:48.724 "message": "Internal error" 00:23:48.724 } 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 276554 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 276554 ']' 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 276554 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:48.724 13:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276554 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276554' 00:23:48.724 killing process with pid 276554 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 276554 00:23:48.724 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 276554 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.BLZfvXNnZ9 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=276852 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 276852 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 276852 ']' 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.982 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.982 [2024-10-14 13:34:40.711097] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:23:48.982 [2024-10-14 13:34:40.711203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.982 [2024-10-14 13:34:40.775292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.983 [2024-10-14 13:34:40.821017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.983 [2024-10-14 13:34:40.821075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.983 [2024-10-14 13:34:40.821104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.983 [2024-10-14 13:34:40.821116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.983 [2024-10-14 13:34:40.821126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:48.983 [2024-10-14 13:34:40.821719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.BLZfvXNnZ9 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BLZfvXNnZ9 00:23:49.241 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:49.499 [2024-10-14 13:34:41.204604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.499 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:49.756 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:50.014 [2024-10-14 13:34:41.738055] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.014 [2024-10-14 13:34:41.738312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:50.014 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:50.272 malloc0 00:23:50.272 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:50.529 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:23:50.787 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.044 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=277137 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 277137 /var/tmp/bdevperf.sock 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 277137 ']' 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:51.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.045 13:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.045 [2024-10-14 13:34:42.853548] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:23:51.045 [2024-10-14 13:34:42.853622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277137 ] 00:23:51.302 [2024-10-14 13:34:42.912843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.302 [2024-10-14 13:34:42.958573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.302 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.302 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:51.302 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:23:51.560 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.817 [2024-10-14 13:34:43.600526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.075 TLSTESTn1 00:23:52.075 13:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:52.334 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:52.334 "subsystems": [ 00:23:52.334 { 00:23:52.334 "subsystem": "keyring", 00:23:52.334 "config": [ 00:23:52.334 { 00:23:52.334 "method": "keyring_file_add_key", 00:23:52.334 "params": { 00:23:52.334 "name": "key0", 00:23:52.334 "path": "/tmp/tmp.BLZfvXNnZ9" 00:23:52.334 } 00:23:52.334 } 00:23:52.334 ] 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "subsystem": "iobuf", 00:23:52.334 "config": [ 00:23:52.334 { 00:23:52.334 "method": "iobuf_set_options", 00:23:52.334 "params": { 00:23:52.334 "small_pool_count": 8192, 00:23:52.334 "large_pool_count": 1024, 00:23:52.334 "small_bufsize": 8192, 00:23:52.334 "large_bufsize": 135168 00:23:52.334 } 00:23:52.334 } 00:23:52.334 ] 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "subsystem": "sock", 00:23:52.334 "config": [ 00:23:52.334 { 00:23:52.334 "method": "sock_set_default_impl", 00:23:52.334 "params": { 00:23:52.334 "impl_name": "posix" 00:23:52.334 } 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "method": "sock_impl_set_options", 00:23:52.334 "params": { 00:23:52.334 "impl_name": "ssl", 00:23:52.334 "recv_buf_size": 4096, 00:23:52.334 "send_buf_size": 4096, 00:23:52.334 "enable_recv_pipe": true, 00:23:52.334 "enable_quickack": false, 00:23:52.334 "enable_placement_id": 0, 00:23:52.334 "enable_zerocopy_send_server": true, 00:23:52.334 "enable_zerocopy_send_client": false, 00:23:52.334 "zerocopy_threshold": 0, 00:23:52.334 "tls_version": 0, 00:23:52.334 "enable_ktls": false 00:23:52.334 } 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "method": "sock_impl_set_options", 00:23:52.334 "params": { 00:23:52.334 "impl_name": "posix", 00:23:52.334 "recv_buf_size": 2097152, 00:23:52.334 "send_buf_size": 2097152, 00:23:52.334 "enable_recv_pipe": true, 00:23:52.334 "enable_quickack": false, 00:23:52.334 "enable_placement_id": 0, 00:23:52.334 
"enable_zerocopy_send_server": true, 00:23:52.334 "enable_zerocopy_send_client": false, 00:23:52.334 "zerocopy_threshold": 0, 00:23:52.334 "tls_version": 0, 00:23:52.334 "enable_ktls": false 00:23:52.334 } 00:23:52.334 } 00:23:52.334 ] 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "subsystem": "vmd", 00:23:52.334 "config": [] 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "subsystem": "accel", 00:23:52.334 "config": [ 00:23:52.334 { 00:23:52.334 "method": "accel_set_options", 00:23:52.334 "params": { 00:23:52.334 "small_cache_size": 128, 00:23:52.334 "large_cache_size": 16, 00:23:52.334 "task_count": 2048, 00:23:52.334 "sequence_count": 2048, 00:23:52.334 "buf_count": 2048 00:23:52.334 } 00:23:52.334 } 00:23:52.334 ] 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "subsystem": "bdev", 00:23:52.334 "config": [ 00:23:52.334 { 00:23:52.334 "method": "bdev_set_options", 00:23:52.334 "params": { 00:23:52.334 "bdev_io_pool_size": 65535, 00:23:52.334 "bdev_io_cache_size": 256, 00:23:52.334 "bdev_auto_examine": true, 00:23:52.334 "iobuf_small_cache_size": 128, 00:23:52.334 "iobuf_large_cache_size": 16 00:23:52.334 } 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "method": "bdev_raid_set_options", 00:23:52.334 "params": { 00:23:52.334 "process_window_size_kb": 1024, 00:23:52.334 "process_max_bandwidth_mb_sec": 0 00:23:52.334 } 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "method": "bdev_iscsi_set_options", 00:23:52.334 "params": { 00:23:52.334 "timeout_sec": 30 00:23:52.334 } 00:23:52.334 }, 00:23:52.334 { 00:23:52.334 "method": "bdev_nvme_set_options", 00:23:52.334 "params": { 00:23:52.334 "action_on_timeout": "none", 00:23:52.334 "timeout_us": 0, 00:23:52.334 "timeout_admin_us": 0, 00:23:52.334 "keep_alive_timeout_ms": 10000, 00:23:52.334 "arbitration_burst": 0, 00:23:52.334 "low_priority_weight": 0, 00:23:52.334 "medium_priority_weight": 0, 00:23:52.334 "high_priority_weight": 0, 00:23:52.335 "nvme_adminq_poll_period_us": 10000, 00:23:52.335 "nvme_ioq_poll_period_us": 0, 00:23:52.335 
"io_queue_requests": 0, 00:23:52.335 "delay_cmd_submit": true, 00:23:52.335 "transport_retry_count": 4, 00:23:52.335 "bdev_retry_count": 3, 00:23:52.335 "transport_ack_timeout": 0, 00:23:52.335 "ctrlr_loss_timeout_sec": 0, 00:23:52.335 "reconnect_delay_sec": 0, 00:23:52.335 "fast_io_fail_timeout_sec": 0, 00:23:52.335 "disable_auto_failback": false, 00:23:52.335 "generate_uuids": false, 00:23:52.335 "transport_tos": 0, 00:23:52.335 "nvme_error_stat": false, 00:23:52.335 "rdma_srq_size": 0, 00:23:52.335 "io_path_stat": false, 00:23:52.335 "allow_accel_sequence": false, 00:23:52.335 "rdma_max_cq_size": 0, 00:23:52.335 "rdma_cm_event_timeout_ms": 0, 00:23:52.335 "dhchap_digests": [ 00:23:52.335 "sha256", 00:23:52.335 "sha384", 00:23:52.335 "sha512" 00:23:52.335 ], 00:23:52.335 "dhchap_dhgroups": [ 00:23:52.335 "null", 00:23:52.335 "ffdhe2048", 00:23:52.335 "ffdhe3072", 00:23:52.335 "ffdhe4096", 00:23:52.335 "ffdhe6144", 00:23:52.335 "ffdhe8192" 00:23:52.335 ] 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "bdev_nvme_set_hotplug", 00:23:52.335 "params": { 00:23:52.335 "period_us": 100000, 00:23:52.335 "enable": false 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "bdev_malloc_create", 00:23:52.335 "params": { 00:23:52.335 "name": "malloc0", 00:23:52.335 "num_blocks": 8192, 00:23:52.335 "block_size": 4096, 00:23:52.335 "physical_block_size": 4096, 00:23:52.335 "uuid": "0512f2f9-6693-407c-a843-0e060b453d99", 00:23:52.335 "optimal_io_boundary": 0, 00:23:52.335 "md_size": 0, 00:23:52.335 "dif_type": 0, 00:23:52.335 "dif_is_head_of_md": false, 00:23:52.335 "dif_pi_format": 0 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "bdev_wait_for_examine" 00:23:52.335 } 00:23:52.335 ] 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "subsystem": "nbd", 00:23:52.335 "config": [] 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "subsystem": "scheduler", 00:23:52.335 "config": [ 00:23:52.335 { 00:23:52.335 "method": 
"framework_set_scheduler", 00:23:52.335 "params": { 00:23:52.335 "name": "static" 00:23:52.335 } 00:23:52.335 } 00:23:52.335 ] 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "subsystem": "nvmf", 00:23:52.335 "config": [ 00:23:52.335 { 00:23:52.335 "method": "nvmf_set_config", 00:23:52.335 "params": { 00:23:52.335 "discovery_filter": "match_any", 00:23:52.335 "admin_cmd_passthru": { 00:23:52.335 "identify_ctrlr": false 00:23:52.335 }, 00:23:52.335 "dhchap_digests": [ 00:23:52.335 "sha256", 00:23:52.335 "sha384", 00:23:52.335 "sha512" 00:23:52.335 ], 00:23:52.335 "dhchap_dhgroups": [ 00:23:52.335 "null", 00:23:52.335 "ffdhe2048", 00:23:52.335 "ffdhe3072", 00:23:52.335 "ffdhe4096", 00:23:52.335 "ffdhe6144", 00:23:52.335 "ffdhe8192" 00:23:52.335 ] 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "nvmf_set_max_subsystems", 00:23:52.335 "params": { 00:23:52.335 "max_subsystems": 1024 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "nvmf_set_crdt", 00:23:52.335 "params": { 00:23:52.335 "crdt1": 0, 00:23:52.335 "crdt2": 0, 00:23:52.335 "crdt3": 0 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "nvmf_create_transport", 00:23:52.335 "params": { 00:23:52.335 "trtype": "TCP", 00:23:52.335 "max_queue_depth": 128, 00:23:52.335 "max_io_qpairs_per_ctrlr": 127, 00:23:52.335 "in_capsule_data_size": 4096, 00:23:52.335 "max_io_size": 131072, 00:23:52.335 "io_unit_size": 131072, 00:23:52.335 "max_aq_depth": 128, 00:23:52.335 "num_shared_buffers": 511, 00:23:52.335 "buf_cache_size": 4294967295, 00:23:52.335 "dif_insert_or_strip": false, 00:23:52.335 "zcopy": false, 00:23:52.335 "c2h_success": false, 00:23:52.335 "sock_priority": 0, 00:23:52.335 "abort_timeout_sec": 1, 00:23:52.335 "ack_timeout": 0, 00:23:52.335 "data_wr_pool_size": 0 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "nvmf_create_subsystem", 00:23:52.335 "params": { 00:23:52.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.335 
"allow_any_host": false, 00:23:52.335 "serial_number": "SPDK00000000000001", 00:23:52.335 "model_number": "SPDK bdev Controller", 00:23:52.335 "max_namespaces": 10, 00:23:52.335 "min_cntlid": 1, 00:23:52.335 "max_cntlid": 65519, 00:23:52.335 "ana_reporting": false 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "nvmf_subsystem_add_host", 00:23:52.335 "params": { 00:23:52.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.335 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.335 "psk": "key0" 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "nvmf_subsystem_add_ns", 00:23:52.335 "params": { 00:23:52.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.335 "namespace": { 00:23:52.335 "nsid": 1, 00:23:52.335 "bdev_name": "malloc0", 00:23:52.335 "nguid": "0512F2F96693407CA8430E060B453D99", 00:23:52.335 "uuid": "0512f2f9-6693-407c-a843-0e060b453d99", 00:23:52.335 "no_auto_visible": false 00:23:52.335 } 00:23:52.335 } 00:23:52.335 }, 00:23:52.335 { 00:23:52.335 "method": "nvmf_subsystem_add_listener", 00:23:52.335 "params": { 00:23:52.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.335 "listen_address": { 00:23:52.335 "trtype": "TCP", 00:23:52.335 "adrfam": "IPv4", 00:23:52.335 "traddr": "10.0.0.2", 00:23:52.335 "trsvcid": "4420" 00:23:52.335 }, 00:23:52.335 "secure_channel": true 00:23:52.335 } 00:23:52.335 } 00:23:52.335 ] 00:23:52.335 } 00:23:52.335 ] 00:23:52.335 }' 00:23:52.335 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:52.595 "subsystems": [ 00:23:52.595 { 00:23:52.595 "subsystem": "keyring", 00:23:52.595 "config": [ 00:23:52.595 { 00:23:52.595 "method": "keyring_file_add_key", 00:23:52.595 "params": { 00:23:52.595 "name": "key0", 00:23:52.595 "path": "/tmp/tmp.BLZfvXNnZ9" 00:23:52.595 } 
00:23:52.595 } 00:23:52.595 ] 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "subsystem": "iobuf", 00:23:52.595 "config": [ 00:23:52.595 { 00:23:52.595 "method": "iobuf_set_options", 00:23:52.595 "params": { 00:23:52.595 "small_pool_count": 8192, 00:23:52.595 "large_pool_count": 1024, 00:23:52.595 "small_bufsize": 8192, 00:23:52.595 "large_bufsize": 135168 00:23:52.595 } 00:23:52.595 } 00:23:52.595 ] 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "subsystem": "sock", 00:23:52.595 "config": [ 00:23:52.595 { 00:23:52.595 "method": "sock_set_default_impl", 00:23:52.595 "params": { 00:23:52.595 "impl_name": "posix" 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "sock_impl_set_options", 00:23:52.595 "params": { 00:23:52.595 "impl_name": "ssl", 00:23:52.595 "recv_buf_size": 4096, 00:23:52.595 "send_buf_size": 4096, 00:23:52.595 "enable_recv_pipe": true, 00:23:52.595 "enable_quickack": false, 00:23:52.595 "enable_placement_id": 0, 00:23:52.595 "enable_zerocopy_send_server": true, 00:23:52.595 "enable_zerocopy_send_client": false, 00:23:52.595 "zerocopy_threshold": 0, 00:23:52.595 "tls_version": 0, 00:23:52.595 "enable_ktls": false 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "sock_impl_set_options", 00:23:52.595 "params": { 00:23:52.595 "impl_name": "posix", 00:23:52.595 "recv_buf_size": 2097152, 00:23:52.595 "send_buf_size": 2097152, 00:23:52.595 "enable_recv_pipe": true, 00:23:52.595 "enable_quickack": false, 00:23:52.595 "enable_placement_id": 0, 00:23:52.595 "enable_zerocopy_send_server": true, 00:23:52.595 "enable_zerocopy_send_client": false, 00:23:52.595 "zerocopy_threshold": 0, 00:23:52.595 "tls_version": 0, 00:23:52.595 "enable_ktls": false 00:23:52.595 } 00:23:52.595 } 00:23:52.595 ] 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "subsystem": "vmd", 00:23:52.595 "config": [] 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "subsystem": "accel", 00:23:52.595 "config": [ 00:23:52.595 { 00:23:52.595 "method": "accel_set_options", 
00:23:52.595 "params": { 00:23:52.595 "small_cache_size": 128, 00:23:52.595 "large_cache_size": 16, 00:23:52.595 "task_count": 2048, 00:23:52.595 "sequence_count": 2048, 00:23:52.595 "buf_count": 2048 00:23:52.595 } 00:23:52.595 } 00:23:52.595 ] 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "subsystem": "bdev", 00:23:52.595 "config": [ 00:23:52.595 { 00:23:52.595 "method": "bdev_set_options", 00:23:52.595 "params": { 00:23:52.595 "bdev_io_pool_size": 65535, 00:23:52.595 "bdev_io_cache_size": 256, 00:23:52.595 "bdev_auto_examine": true, 00:23:52.595 "iobuf_small_cache_size": 128, 00:23:52.595 "iobuf_large_cache_size": 16 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "bdev_raid_set_options", 00:23:52.595 "params": { 00:23:52.595 "process_window_size_kb": 1024, 00:23:52.595 "process_max_bandwidth_mb_sec": 0 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "bdev_iscsi_set_options", 00:23:52.595 "params": { 00:23:52.595 "timeout_sec": 30 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "bdev_nvme_set_options", 00:23:52.595 "params": { 00:23:52.595 "action_on_timeout": "none", 00:23:52.595 "timeout_us": 0, 00:23:52.595 "timeout_admin_us": 0, 00:23:52.595 "keep_alive_timeout_ms": 10000, 00:23:52.595 "arbitration_burst": 0, 00:23:52.595 "low_priority_weight": 0, 00:23:52.595 "medium_priority_weight": 0, 00:23:52.595 "high_priority_weight": 0, 00:23:52.595 "nvme_adminq_poll_period_us": 10000, 00:23:52.595 "nvme_ioq_poll_period_us": 0, 00:23:52.595 "io_queue_requests": 512, 00:23:52.595 "delay_cmd_submit": true, 00:23:52.595 "transport_retry_count": 4, 00:23:52.595 "bdev_retry_count": 3, 00:23:52.595 "transport_ack_timeout": 0, 00:23:52.595 "ctrlr_loss_timeout_sec": 0, 00:23:52.595 "reconnect_delay_sec": 0, 00:23:52.595 "fast_io_fail_timeout_sec": 0, 00:23:52.595 "disable_auto_failback": false, 00:23:52.595 "generate_uuids": false, 00:23:52.595 "transport_tos": 0, 00:23:52.595 "nvme_error_stat": false, 00:23:52.595 
"rdma_srq_size": 0, 00:23:52.595 "io_path_stat": false, 00:23:52.595 "allow_accel_sequence": false, 00:23:52.595 "rdma_max_cq_size": 0, 00:23:52.595 "rdma_cm_event_timeout_ms": 0, 00:23:52.595 "dhchap_digests": [ 00:23:52.595 "sha256", 00:23:52.595 "sha384", 00:23:52.595 "sha512" 00:23:52.595 ], 00:23:52.595 "dhchap_dhgroups": [ 00:23:52.595 "null", 00:23:52.595 "ffdhe2048", 00:23:52.595 "ffdhe3072", 00:23:52.595 "ffdhe4096", 00:23:52.595 "ffdhe6144", 00:23:52.595 "ffdhe8192" 00:23:52.595 ] 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "bdev_nvme_attach_controller", 00:23:52.595 "params": { 00:23:52.595 "name": "TLSTEST", 00:23:52.595 "trtype": "TCP", 00:23:52.595 "adrfam": "IPv4", 00:23:52.595 "traddr": "10.0.0.2", 00:23:52.595 "trsvcid": "4420", 00:23:52.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.595 "prchk_reftag": false, 00:23:52.595 "prchk_guard": false, 00:23:52.595 "ctrlr_loss_timeout_sec": 0, 00:23:52.595 "reconnect_delay_sec": 0, 00:23:52.595 "fast_io_fail_timeout_sec": 0, 00:23:52.595 "psk": "key0", 00:23:52.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.595 "hdgst": false, 00:23:52.595 "ddgst": false, 00:23:52.595 "multipath": "multipath" 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "bdev_nvme_set_hotplug", 00:23:52.595 "params": { 00:23:52.595 "period_us": 100000, 00:23:52.595 "enable": false 00:23:52.595 } 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "method": "bdev_wait_for_examine" 00:23:52.595 } 00:23:52.595 ] 00:23:52.595 }, 00:23:52.595 { 00:23:52.595 "subsystem": "nbd", 00:23:52.595 "config": [] 00:23:52.595 } 00:23:52.595 ] 00:23:52.595 }' 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 277137 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 277137 ']' 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 277137 00:23:52.595 13:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277137 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277137' 00:23:52.595 killing process with pid 277137 00:23:52.595 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 277137 00:23:52.595 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.595 00:23:52.595 Latency(us) 00:23:52.595 [2024-10-14T11:34:44.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.596 [2024-10-14T11:34:44.449Z] =================================================================================================================== 00:23:52.596 [2024-10-14T11:34:44.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.596 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 277137 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 276852 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 276852 ']' 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 276852 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 276852 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 276852' 00:23:52.854 killing process with pid 276852 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 276852 00:23:52.854 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 276852 00:23:53.113 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:53.113 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:53.113 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:53.113 "subsystems": [ 00:23:53.113 { 00:23:53.113 "subsystem": "keyring", 00:23:53.113 "config": [ 00:23:53.113 { 00:23:53.113 "method": "keyring_file_add_key", 00:23:53.113 "params": { 00:23:53.113 "name": "key0", 00:23:53.113 "path": "/tmp/tmp.BLZfvXNnZ9" 00:23:53.113 } 00:23:53.113 } 00:23:53.113 ] 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "subsystem": "iobuf", 00:23:53.113 "config": [ 00:23:53.113 { 00:23:53.113 "method": "iobuf_set_options", 00:23:53.113 "params": { 00:23:53.113 "small_pool_count": 8192, 00:23:53.113 "large_pool_count": 1024, 00:23:53.113 "small_bufsize": 8192, 00:23:53.113 "large_bufsize": 135168 00:23:53.113 } 00:23:53.113 } 00:23:53.113 ] 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "subsystem": "sock", 00:23:53.113 "config": [ 00:23:53.113 { 00:23:53.113 "method": "sock_set_default_impl", 00:23:53.113 "params": { 00:23:53.113 "impl_name": "posix" 00:23:53.113 } 
00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "method": "sock_impl_set_options", 00:23:53.113 "params": { 00:23:53.113 "impl_name": "ssl", 00:23:53.113 "recv_buf_size": 4096, 00:23:53.113 "send_buf_size": 4096, 00:23:53.113 "enable_recv_pipe": true, 00:23:53.113 "enable_quickack": false, 00:23:53.113 "enable_placement_id": 0, 00:23:53.113 "enable_zerocopy_send_server": true, 00:23:53.113 "enable_zerocopy_send_client": false, 00:23:53.113 "zerocopy_threshold": 0, 00:23:53.113 "tls_version": 0, 00:23:53.113 "enable_ktls": false 00:23:53.113 } 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "method": "sock_impl_set_options", 00:23:53.113 "params": { 00:23:53.113 "impl_name": "posix", 00:23:53.113 "recv_buf_size": 2097152, 00:23:53.113 "send_buf_size": 2097152, 00:23:53.113 "enable_recv_pipe": true, 00:23:53.113 "enable_quickack": false, 00:23:53.113 "enable_placement_id": 0, 00:23:53.113 "enable_zerocopy_send_server": true, 00:23:53.113 "enable_zerocopy_send_client": false, 00:23:53.113 "zerocopy_threshold": 0, 00:23:53.113 "tls_version": 0, 00:23:53.113 "enable_ktls": false 00:23:53.113 } 00:23:53.113 } 00:23:53.113 ] 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "subsystem": "vmd", 00:23:53.113 "config": [] 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "subsystem": "accel", 00:23:53.113 "config": [ 00:23:53.113 { 00:23:53.113 "method": "accel_set_options", 00:23:53.113 "params": { 00:23:53.113 "small_cache_size": 128, 00:23:53.113 "large_cache_size": 16, 00:23:53.113 "task_count": 2048, 00:23:53.113 "sequence_count": 2048, 00:23:53.113 "buf_count": 2048 00:23:53.113 } 00:23:53.113 } 00:23:53.113 ] 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "subsystem": "bdev", 00:23:53.113 "config": [ 00:23:53.113 { 00:23:53.113 "method": "bdev_set_options", 00:23:53.113 "params": { 00:23:53.113 "bdev_io_pool_size": 65535, 00:23:53.113 "bdev_io_cache_size": 256, 00:23:53.113 "bdev_auto_examine": true, 00:23:53.113 "iobuf_small_cache_size": 128, 00:23:53.113 "iobuf_large_cache_size": 16 
00:23:53.113 } 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "method": "bdev_raid_set_options", 00:23:53.113 "params": { 00:23:53.113 "process_window_size_kb": 1024, 00:23:53.113 "process_max_bandwidth_mb_sec": 0 00:23:53.113 } 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "method": "bdev_iscsi_set_options", 00:23:53.113 "params": { 00:23:53.113 "timeout_sec": 30 00:23:53.113 } 00:23:53.113 }, 00:23:53.113 { 00:23:53.113 "method": "bdev_nvme_set_options", 00:23:53.113 "params": { 00:23:53.113 "action_on_timeout": "none", 00:23:53.113 "timeout_us": 0, 00:23:53.113 "timeout_admin_us": 0, 00:23:53.113 "keep_alive_timeout_ms": 10000, 00:23:53.113 "arbitration_burst": 0, 00:23:53.113 "low_priority_weight": 0, 00:23:53.113 "medium_priority_weight": 0, 00:23:53.113 "high_priority_weight": 0, 00:23:53.113 "nvme_adminq_poll_period_us": 10000, 00:23:53.113 "nvme_ioq_poll_period_us": 0, 00:23:53.113 "io_queue_requests": 0, 00:23:53.113 "delay_cmd_submit": true, 00:23:53.113 "transport_retry_count": 4, 00:23:53.113 "bdev_retry_count": 3, 00:23:53.113 "transport_ack_timeout": 0, 00:23:53.113 "ctrlr_loss_timeout_sec": 0, 00:23:53.113 "reconnect_delay_sec": 0, 00:23:53.113 "fast_io_fail_timeout_sec": 0, 00:23:53.113 "disable_auto_failback": false, 00:23:53.113 "generate_uuids": false, 00:23:53.113 "transport_tos": 0, 00:23:53.113 "nvme_error_stat": false, 00:23:53.113 "rdma_srq_size": 0, 00:23:53.113 "io_path_stat": false, 00:23:53.113 "allow_accel_sequence": false, 00:23:53.113 "rdma_max_cq_size": 0, 00:23:53.114 "rdma_cm_event_timeout_ms": 0, 00:23:53.114 "dhchap_digests": [ 00:23:53.114 "sha256", 00:23:53.114 "sha384", 00:23:53.114 "sha512" 00:23:53.114 ], 00:23:53.114 "dhchap_dhgroups": [ 00:23:53.114 "null", 00:23:53.114 "ffdhe2048", 00:23:53.114 "ffdhe3072", 00:23:53.114 "ffdhe4096", 00:23:53.114 "ffdhe6144", 00:23:53.114 "ffdhe8192" 00:23:53.114 ] 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "bdev_nvme_set_hotplug", 00:23:53.114 "params": { 00:23:53.114 
"period_us": 100000, 00:23:53.114 "enable": false 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "bdev_malloc_create", 00:23:53.114 "params": { 00:23:53.114 "name": "malloc0", 00:23:53.114 "num_blocks": 8192, 00:23:53.114 "block_size": 4096, 00:23:53.114 "physical_block_size": 4096, 00:23:53.114 "uuid": "0512f2f9-6693-407c-a843-0e060b453d99", 00:23:53.114 "optimal_io_boundary": 0, 00:23:53.114 "md_size": 0, 00:23:53.114 "dif_type": 0, 00:23:53.114 "dif_is_head_of_md": false, 00:23:53.114 "dif_pi_format": 0 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "bdev_wait_for_examine" 00:23:53.114 } 00:23:53.114 ] 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "subsystem": "nbd", 00:23:53.114 "config": [] 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "subsystem": "scheduler", 00:23:53.114 "config": [ 00:23:53.114 { 00:23:53.114 "method": "framework_set_scheduler", 00:23:53.114 "params": { 00:23:53.114 "name": "static" 00:23:53.114 } 00:23:53.114 } 00:23:53.114 ] 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "subsystem": "nvmf", 00:23:53.114 "config": [ 00:23:53.114 { 00:23:53.114 "method": "nvmf_set_config", 00:23:53.114 "params": { 00:23:53.114 "discovery_filter": "match_any", 00:23:53.114 "admin_cmd_passthru": { 00:23:53.114 "identify_ctrlr": false 00:23:53.114 }, 00:23:53.114 "dhchap_digests": [ 00:23:53.114 "sha256", 00:23:53.114 "sha384", 00:23:53.114 "sha512" 00:23:53.114 ], 00:23:53.114 "dhchap_dhgroups": [ 00:23:53.114 "null", 00:23:53.114 "ffdhe2048", 00:23:53.114 "ffdhe3072", 00:23:53.114 "ffdhe4096", 00:23:53.114 "ffdhe6144", 00:23:53.114 "ffdhe8192" 00:23:53.114 ] 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "nvmf_set_max_subsystems", 00:23:53.114 "params": { 00:23:53.114 "max_subsystems": 1024 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "nvmf_set_crdt", 00:23:53.114 "params": { 00:23:53.114 "crdt1": 0, 00:23:53.114 "crdt2": 0, 00:23:53.114 "crdt3": 0 00:23:53.114 } 
00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "nvmf_create_transport", 00:23:53.114 "params": { 00:23:53.114 "trtype": "TCP", 00:23:53.114 "max_queue_depth": 128, 00:23:53.114 "max_io_qpairs_per_ctrlr": 127, 00:23:53.114 "in_capsule_data_size": 4096, 00:23:53.114 "max_io_size": 131072, 00:23:53.114 "io_unit_size": 131072, 00:23:53.114 "max_aq_depth": 128, 00:23:53.114 "num_shared_buffers": 511, 00:23:53.114 "buf_cache_size": 4294967295, 00:23:53.114 "dif_insert_or_strip": false, 00:23:53.114 "zcopy": false, 00:23:53.114 "c2h_success": false, 00:23:53.114 "sock_priority": 0, 00:23:53.114 "abort_timeout_sec": 1, 00:23:53.114 "ack_timeout": 0, 00:23:53.114 "data_wr_pool_size": 0 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "nvmf_create_subsystem", 00:23:53.114 "params": { 00:23:53.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.114 "allow_any_host": false, 00:23:53.114 "serial_number": "SPDK00000000000001", 00:23:53.114 "model_number": "SPDK bdev Controller", 00:23:53.114 "max_namespaces": 10, 00:23:53.114 "min_cntlid": 1, 00:23:53.114 "max_cntlid": 65519, 00:23:53.114 "ana_reporting": false 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "nvmf_subsystem_add_host", 00:23:53.114 "params": { 00:23:53.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.114 "host": "nqn.2016-06.io.spdk:host1", 00:23:53.114 "psk": "key0" 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "nvmf_subsystem_add_ns", 00:23:53.114 "params": { 00:23:53.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.114 "namespace": { 00:23:53.114 "nsid": 1, 00:23:53.114 "bdev_name": "malloc0", 00:23:53.114 "nguid": "0512F2F96693407CA8430E060B453D99", 00:23:53.114 "uuid": "0512f2f9-6693-407c-a843-0e060b453d99", 00:23:53.114 "no_auto_visible": false 00:23:53.114 } 00:23:53.114 } 00:23:53.114 }, 00:23:53.114 { 00:23:53.114 "method": "nvmf_subsystem_add_listener", 00:23:53.114 "params": { 00:23:53.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:23:53.114 "listen_address": { 00:23:53.114 "trtype": "TCP", 00:23:53.114 "adrfam": "IPv4", 00:23:53.114 "traddr": "10.0.0.2", 00:23:53.114 "trsvcid": "4420" 00:23:53.114 }, 00:23:53.114 "secure_channel": true 00:23:53.114 } 00:23:53.114 } 00:23:53.114 ] 00:23:53.114 } 00:23:53.114 ] 00:23:53.114 }' 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=277420 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 277420 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 277420 ']' 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.114 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.114 [2024-10-14 13:34:44.870914] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
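The target configuration streamed to nvmf_tgt above is hard to follow with per-line timestamps interleaved. As a readability aid (not part of the captured log), the two TLS-relevant calls it contains, extracted verbatim — the `key0` they reference is registered elsewhere in the same config via `keyring_file_add_key`:

```json
[
  { "method": "nvmf_subsystem_add_host",
    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                "host": "nqn.2016-06.io.spdk:host1",
                "psk": "key0" } },
  { "method": "nvmf_subsystem_add_listener",
    "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                    "traddr": "10.0.0.2", "trsvcid": "4420" },
                "secure_channel": true } }
]
```

Together these require the host to present the PSK and force a TLS-wrapped listener, which is why the target later logs "TLS support is considered experimental" when it starts listening on 10.0.0.2:4420.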
00:23:53.114 [2024-10-14 13:34:44.870989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.114 [2024-10-14 13:34:44.933772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.372 [2024-10-14 13:34:44.979863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.372 [2024-10-14 13:34:44.979909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.372 [2024-10-14 13:34:44.979937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.372 [2024-10-14 13:34:44.979948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.372 [2024-10-14 13:34:44.979957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
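The nvmf_tgt above is launched with `-c /dev/fd/62` (and bdevperf later with `-c /dev/fd/63`): that path comes from bash process substitution, so the large JSON config is handed to the daemon on an anonymous file descriptor without ever touching disk. A minimal, SPDK-free sketch of the mechanic (assumes bash; the `config` string is a placeholder):

```shell
#!/usr/bin/env bash
# bash replaces <(...) with a /dev/fd/NN path; the child process opens
# that path and reads the generated text as if it were a regular file.
config='{ "subsystems": [] }'
cat <(printf '%s\n' "$config")
```

In the log, the test script echoes the full multi-subsystem config into such a substitution, which is why the config text appears inline in the xtrace output immediately around the daemon's command line.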
00:23:53.372 [2024-10-14 13:34:44.980583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.372 [2024-10-14 13:34:45.207773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.629 [2024-10-14 13:34:45.239807] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.629 [2024-10-14 13:34:45.240074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=277568 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 277568 /var/tmp/bdevperf.sock 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 277568 ']' 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.195 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:54.195 "subsystems": [ 00:23:54.195 { 00:23:54.195 "subsystem": "keyring", 00:23:54.195 "config": [ 00:23:54.195 { 00:23:54.195 "method": "keyring_file_add_key", 00:23:54.195 "params": { 00:23:54.195 "name": "key0", 00:23:54.195 "path": "/tmp/tmp.BLZfvXNnZ9" 00:23:54.195 } 00:23:54.195 } 00:23:54.195 ] 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "subsystem": "iobuf", 00:23:54.195 "config": [ 00:23:54.195 { 00:23:54.195 "method": "iobuf_set_options", 00:23:54.195 "params": { 00:23:54.195 "small_pool_count": 8192, 00:23:54.195 "large_pool_count": 1024, 00:23:54.195 "small_bufsize": 8192, 00:23:54.195 "large_bufsize": 135168 00:23:54.195 } 00:23:54.195 } 00:23:54.195 ] 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "subsystem": "sock", 00:23:54.195 "config": [ 00:23:54.195 { 00:23:54.195 "method": "sock_set_default_impl", 00:23:54.195 "params": { 00:23:54.195 "impl_name": "posix" 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "sock_impl_set_options", 00:23:54.195 "params": { 00:23:54.195 "impl_name": "ssl", 00:23:54.195 "recv_buf_size": 4096, 00:23:54.195 "send_buf_size": 4096, 00:23:54.195 "enable_recv_pipe": true, 00:23:54.195 "enable_quickack": false, 00:23:54.195 "enable_placement_id": 0, 00:23:54.195 "enable_zerocopy_send_server": true, 00:23:54.195 "enable_zerocopy_send_client": false, 00:23:54.195 "zerocopy_threshold": 0, 00:23:54.195 "tls_version": 0, 00:23:54.195 "enable_ktls": false 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "sock_impl_set_options", 00:23:54.195 "params": { 00:23:54.195 "impl_name": "posix", 
00:23:54.195 "recv_buf_size": 2097152, 00:23:54.195 "send_buf_size": 2097152, 00:23:54.195 "enable_recv_pipe": true, 00:23:54.195 "enable_quickack": false, 00:23:54.195 "enable_placement_id": 0, 00:23:54.195 "enable_zerocopy_send_server": true, 00:23:54.195 "enable_zerocopy_send_client": false, 00:23:54.195 "zerocopy_threshold": 0, 00:23:54.195 "tls_version": 0, 00:23:54.195 "enable_ktls": false 00:23:54.195 } 00:23:54.195 } 00:23:54.195 ] 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "subsystem": "vmd", 00:23:54.195 "config": [] 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "subsystem": "accel", 00:23:54.195 "config": [ 00:23:54.195 { 00:23:54.195 "method": "accel_set_options", 00:23:54.195 "params": { 00:23:54.195 "small_cache_size": 128, 00:23:54.195 "large_cache_size": 16, 00:23:54.195 "task_count": 2048, 00:23:54.195 "sequence_count": 2048, 00:23:54.195 "buf_count": 2048 00:23:54.195 } 00:23:54.195 } 00:23:54.195 ] 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "subsystem": "bdev", 00:23:54.195 "config": [ 00:23:54.195 { 00:23:54.195 "method": "bdev_set_options", 00:23:54.195 "params": { 00:23:54.195 "bdev_io_pool_size": 65535, 00:23:54.195 "bdev_io_cache_size": 256, 00:23:54.195 "bdev_auto_examine": true, 00:23:54.195 "iobuf_small_cache_size": 128, 00:23:54.195 "iobuf_large_cache_size": 16 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "bdev_raid_set_options", 00:23:54.195 "params": { 00:23:54.195 "process_window_size_kb": 1024, 00:23:54.195 "process_max_bandwidth_mb_sec": 0 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "bdev_iscsi_set_options", 00:23:54.195 "params": { 00:23:54.195 "timeout_sec": 30 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "bdev_nvme_set_options", 00:23:54.195 "params": { 00:23:54.195 "action_on_timeout": "none", 00:23:54.195 "timeout_us": 0, 00:23:54.195 "timeout_admin_us": 0, 00:23:54.195 "keep_alive_timeout_ms": 10000, 00:23:54.195 "arbitration_burst": 0, 00:23:54.195 
"low_priority_weight": 0, 00:23:54.195 "medium_priority_weight": 0, 00:23:54.195 "high_priority_weight": 0, 00:23:54.195 "nvme_adminq_poll_period_us": 10000, 00:23:54.195 "nvme_ioq_poll_period_us": 0, 00:23:54.195 "io_queue_requests": 512, 00:23:54.195 "delay_cmd_submit": true, 00:23:54.195 "transport_retry_count": 4, 00:23:54.195 "bdev_retry_count": 3, 00:23:54.195 "transport_ack_timeout": 0, 00:23:54.195 "ctrlr_loss_timeout_sec": 0, 00:23:54.195 "reconnect_delay_sec": 0, 00:23:54.195 "fast_io_fail_timeout_sec": 0, 00:23:54.195 "disable_auto_failback": false, 00:23:54.195 "generate_uuids": false, 00:23:54.195 "transport_tos": 0, 00:23:54.195 "nvme_error_stat": false, 00:23:54.195 "rdma_srq_size": 0, 00:23:54.195 "io_path_stat": false, 00:23:54.195 "allow_accel_sequence": false, 00:23:54.195 "rdma_max_cq_size": 0, 00:23:54.195 "rdma_cm_event_timeout_ms": 0, 00:23:54.195 "dhchap_digests": [ 00:23:54.195 "sha256", 00:23:54.195 "sha384", 00:23:54.195 "sha512" 00:23:54.195 ], 00:23:54.195 "dhchap_dhgroups": [ 00:23:54.195 "null", 00:23:54.195 "ffdhe2048", 00:23:54.195 "ffdhe3072", 00:23:54.195 "ffdhe4096", 00:23:54.195 "ffdhe6144", 00:23:54.195 "ffdhe8192" 00:23:54.195 ] 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "bdev_nvme_attach_controller", 00:23:54.195 "params": { 00:23:54.195 "name": "TLSTEST", 00:23:54.195 "trtype": "TCP", 00:23:54.195 "adrfam": "IPv4", 00:23:54.195 "traddr": "10.0.0.2", 00:23:54.195 "trsvcid": "4420", 00:23:54.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.195 "prchk_reftag": false, 00:23:54.195 "prchk_guard": false, 00:23:54.195 "ctrlr_loss_timeout_sec": 0, 00:23:54.195 "reconnect_delay_sec": 0, 00:23:54.195 "fast_io_fail_timeout_sec": 0, 00:23:54.195 "psk": "key0", 00:23:54.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.195 "hdgst": false, 00:23:54.195 "ddgst": false, 00:23:54.195 "multipath": "multipath" 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "bdev_nvme_set_hotplug", 
00:23:54.195 "params": { 00:23:54.195 "period_us": 100000, 00:23:54.195 "enable": false 00:23:54.195 } 00:23:54.195 }, 00:23:54.195 { 00:23:54.195 "method": "bdev_wait_for_examine" 00:23:54.196 } 00:23:54.196 ] 00:23:54.196 }, 00:23:54.196 { 00:23:54.196 "subsystem": "nbd", 00:23:54.196 "config": [] 00:23:54.196 } 00:23:54.196 ] 00:23:54.196 }' 00:23:54.196 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.196 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.196 [2024-10-14 13:34:45.932050] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:23:54.196 [2024-10-14 13:34:45.932141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277568 ] 00:23:54.196 [2024-10-14 13:34:45.991370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.196 [2024-10-14 13:34:46.036472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.453 [2024-10-14 13:34:46.210777] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.710 13:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.711 13:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:54.711 13:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:54.711 Running I/O for 10 seconds... 
00:23:57.014 3387.00 IOPS, 13.23 MiB/s [2024-10-14T11:34:49.800Z] 3301.00 IOPS, 12.89 MiB/s [2024-10-14T11:34:50.734Z] 3362.67 IOPS, 13.14 MiB/s [2024-10-14T11:34:51.666Z] 3399.50 IOPS, 13.28 MiB/s [2024-10-14T11:34:52.597Z] 3396.20 IOPS, 13.27 MiB/s [2024-10-14T11:34:53.530Z] 3388.00 IOPS, 13.23 MiB/s [2024-10-14T11:34:54.460Z] 3390.57 IOPS, 13.24 MiB/s [2024-10-14T11:34:55.832Z] 3380.00 IOPS, 13.20 MiB/s [2024-10-14T11:34:56.764Z] 3361.67 IOPS, 13.13 MiB/s [2024-10-14T11:34:56.764Z] 3374.50 IOPS, 13.18 MiB/s 00:24:04.911 Latency(us) 00:24:04.911 [2024-10-14T11:34:56.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.911 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:04.911 Verification LBA range: start 0x0 length 0x2000 00:24:04.911 TLSTESTn1 : 10.02 3380.42 13.20 0.00 0.00 37801.38 6893.42 33981.63 00:24:04.912 [2024-10-14T11:34:56.765Z] =================================================================================================================== 00:24:04.912 [2024-10-14T11:34:56.765Z] Total : 3380.42 13.20 0.00 0.00 37801.38 6893.42 33981.63 00:24:04.912 { 00:24:04.912 "results": [ 00:24:04.912 { 00:24:04.912 "job": "TLSTESTn1", 00:24:04.912 "core_mask": "0x4", 00:24:04.912 "workload": "verify", 00:24:04.912 "status": "finished", 00:24:04.912 "verify_range": { 00:24:04.912 "start": 0, 00:24:04.912 "length": 8192 00:24:04.912 }, 00:24:04.912 "queue_depth": 128, 00:24:04.912 "io_size": 4096, 00:24:04.912 "runtime": 10.019748, 00:24:04.912 "iops": 3380.4243380172834, 00:24:04.912 "mibps": 13.204782570380013, 00:24:04.912 "io_failed": 0, 00:24:04.912 "io_timeout": 0, 00:24:04.912 "avg_latency_us": 37801.37508459657, 00:24:04.912 "min_latency_us": 6893.416296296296, 00:24:04.912 "max_latency_us": 33981.62962962963 00:24:04.912 } 00:24:04.912 ], 00:24:04.912 "core_count": 1 00:24:04.912 } 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 277568 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 277568 ']' 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 277568 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277568 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277568' 00:24:04.912 killing process with pid 277568 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 277568 00:24:04.912 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.912 00:24:04.912 Latency(us) 00:24:04.912 [2024-10-14T11:34:56.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.912 [2024-10-14T11:34:56.765Z] =================================================================================================================== 00:24:04.912 [2024-10-14T11:34:56.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 277568 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 277420 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 
-- # '[' -z 277420 ']' 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 277420 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.912 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 277420 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 277420' 00:24:05.171 killing process with pid 277420 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 277420 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 277420 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=278772 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 278772 00:24:05.171 13:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 278772 ']' 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.171 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.429 [2024-10-14 13:34:57.041515] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:24:05.429 [2024-10-14 13:34:57.041609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.429 [2024-10-14 13:34:57.105336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.429 [2024-10-14 13:34:57.148695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.429 [2024-10-14 13:34:57.148775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.429 [2024-10-14 13:34:57.148788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.429 [2024-10-14 13:34:57.148813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:05.429 [2024-10-14 13:34:57.148822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.429 [2024-10-14 13:34:57.149362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.429 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.429 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:05.429 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:05.429 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.429 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.687 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.687 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.BLZfvXNnZ9 00:24:05.687 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BLZfvXNnZ9 00:24:05.687 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:05.944 [2024-10-14 13:34:57.550045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.945 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:06.202 13:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:06.460 [2024-10-14 13:34:58.091550] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:06.460 [2024-10-14 13:34:58.091808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.460 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:06.718 malloc0 00:24:06.718 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:06.976 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:24:07.234 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=279062 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 279062 /var/tmp/bdevperf.sock 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279062 ']' 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.492 13:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.492 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.492 [2024-10-14 13:34:59.244280] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:24:07.492 [2024-10-14 13:34:59.244368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279062 ] 00:24:07.492 [2024-10-14 13:34:59.301997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.751 [2024-10-14 13:34:59.347968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.751 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.751 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:07.751 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:24:08.008 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:08.266 [2024-10-14 13:34:59.983362] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:24:08.266 nvme0n1 00:24:08.266 13:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:08.524 Running I/O for 1 seconds... 00:24:09.457 3500.00 IOPS, 13.67 MiB/s 00:24:09.457 Latency(us) 00:24:09.457 [2024-10-14T11:35:01.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.457 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:09.457 Verification LBA range: start 0x0 length 0x2000 00:24:09.457 nvme0n1 : 1.02 3557.65 13.90 0.00 0.00 35652.81 6262.33 32622.36 00:24:09.457 [2024-10-14T11:35:01.310Z] =================================================================================================================== 00:24:09.457 [2024-10-14T11:35:01.310Z] Total : 3557.65 13.90 0.00 0.00 35652.81 6262.33 32622.36 00:24:09.457 { 00:24:09.457 "results": [ 00:24:09.457 { 00:24:09.457 "job": "nvme0n1", 00:24:09.457 "core_mask": "0x2", 00:24:09.457 "workload": "verify", 00:24:09.457 "status": "finished", 00:24:09.457 "verify_range": { 00:24:09.457 "start": 0, 00:24:09.457 "length": 8192 00:24:09.457 }, 00:24:09.457 "queue_depth": 128, 00:24:09.457 "io_size": 4096, 00:24:09.457 "runtime": 1.020054, 00:24:09.457 "iops": 3557.6547908247994, 00:24:09.457 "mibps": 13.897089026659373, 00:24:09.457 "io_failed": 0, 00:24:09.457 "io_timeout": 0, 00:24:09.457 "avg_latency_us": 35652.81334762153, 00:24:09.457 "min_latency_us": 6262.328888888889, 00:24:09.458 "max_latency_us": 32622.364444444444 00:24:09.458 } 00:24:09.458 ], 00:24:09.458 "core_count": 1 00:24:09.458 } 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 279062 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279062 ']' 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 279062 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279062 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279062' 00:24:09.458 killing process with pid 279062 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279062 00:24:09.458 Received shutdown signal, test time was about 1.000000 seconds 00:24:09.458 00:24:09.458 Latency(us) 00:24:09.458 [2024-10-14T11:35:01.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.458 [2024-10-14T11:35:01.311Z] =================================================================================================================== 00:24:09.458 [2024-10-14T11:35:01.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.458 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279062 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 278772 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 278772 ']' 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 278772 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 278772 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 278772' 00:24:09.715 killing process with pid 278772 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 278772 00:24:09.715 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 278772 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=279337 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 279337 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279337 ']' 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.973 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.973 [2024-10-14 13:35:01.761086] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:24:09.973 [2024-10-14 13:35:01.761186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.973 [2024-10-14 13:35:01.826291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.232 [2024-10-14 13:35:01.869704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.232 [2024-10-14 13:35:01.869766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.232 [2024-10-14 13:35:01.869794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.232 [2024-10-14 13:35:01.869805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.232 [2024-10-14 13:35:01.869814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.232 [2024-10-14 13:35:01.870393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.232 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.232 [2024-10-14 13:35:02.033079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.232 malloc0 00:24:10.232 [2024-10-14 13:35:02.064810] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.232 [2024-10-14 13:35:02.065076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.490 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.490 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=279479 00:24:10.490 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 279479 /var/tmp/bdevperf.sock 00:24:10.490 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279479 ']' 00:24:10.490 13:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:10.490 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.491 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.491 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.491 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.491 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.491 [2024-10-14 13:35:02.135459] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:24:10.491 [2024-10-14 13:35:02.135536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279479 ] 00:24:10.491 [2024-10-14 13:35:02.192939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.491 [2024-10-14 13:35:02.237682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.748 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.748 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:10.748 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BLZfvXNnZ9 00:24:11.007 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:11.265 [2024-10-14 13:35:02.896055] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.265 nvme0n1 00:24:11.265 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:11.265 Running I/O for 1 seconds... 
00:24:12.638 3276.00 IOPS, 12.80 MiB/s 00:24:12.638 Latency(us) 00:24:12.638 [2024-10-14T11:35:04.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.639 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:12.639 Verification LBA range: start 0x0 length 0x2000 00:24:12.639 nvme0n1 : 1.02 3338.39 13.04 0.00 0.00 37973.40 6189.51 55147.33 00:24:12.639 [2024-10-14T11:35:04.492Z] =================================================================================================================== 00:24:12.639 [2024-10-14T11:35:04.492Z] Total : 3338.39 13.04 0.00 0.00 37973.40 6189.51 55147.33 00:24:12.639 { 00:24:12.639 "results": [ 00:24:12.639 { 00:24:12.639 "job": "nvme0n1", 00:24:12.639 "core_mask": "0x2", 00:24:12.639 "workload": "verify", 00:24:12.639 "status": "finished", 00:24:12.639 "verify_range": { 00:24:12.639 "start": 0, 00:24:12.639 "length": 8192 00:24:12.639 }, 00:24:12.639 "queue_depth": 128, 00:24:12.639 "io_size": 4096, 00:24:12.639 "runtime": 1.019652, 00:24:12.639 "iops": 3338.3938834033574, 00:24:12.639 "mibps": 13.040601107044365, 00:24:12.639 "io_failed": 0, 00:24:12.639 "io_timeout": 0, 00:24:12.639 "avg_latency_us": 37973.39795447622, 00:24:12.639 "min_latency_us": 6189.511111111111, 00:24:12.639 "max_latency_us": 55147.33037037037 00:24:12.639 } 00:24:12.639 ], 00:24:12.639 "core_count": 1 00:24:12.639 } 00:24:12.639 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:12.639 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.639 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.639 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.639 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:12.639 "subsystems": [ 00:24:12.639 { 00:24:12.639 "subsystem": 
"keyring", 00:24:12.639 "config": [ 00:24:12.639 { 00:24:12.639 "method": "keyring_file_add_key", 00:24:12.639 "params": { 00:24:12.639 "name": "key0", 00:24:12.639 "path": "/tmp/tmp.BLZfvXNnZ9" 00:24:12.639 } 00:24:12.639 } 00:24:12.639 ] 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "subsystem": "iobuf", 00:24:12.639 "config": [ 00:24:12.639 { 00:24:12.639 "method": "iobuf_set_options", 00:24:12.639 "params": { 00:24:12.639 "small_pool_count": 8192, 00:24:12.639 "large_pool_count": 1024, 00:24:12.639 "small_bufsize": 8192, 00:24:12.639 "large_bufsize": 135168 00:24:12.639 } 00:24:12.639 } 00:24:12.639 ] 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "subsystem": "sock", 00:24:12.639 "config": [ 00:24:12.639 { 00:24:12.639 "method": "sock_set_default_impl", 00:24:12.639 "params": { 00:24:12.639 "impl_name": "posix" 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "sock_impl_set_options", 00:24:12.639 "params": { 00:24:12.639 "impl_name": "ssl", 00:24:12.639 "recv_buf_size": 4096, 00:24:12.639 "send_buf_size": 4096, 00:24:12.639 "enable_recv_pipe": true, 00:24:12.639 "enable_quickack": false, 00:24:12.639 "enable_placement_id": 0, 00:24:12.639 "enable_zerocopy_send_server": true, 00:24:12.639 "enable_zerocopy_send_client": false, 00:24:12.639 "zerocopy_threshold": 0, 00:24:12.639 "tls_version": 0, 00:24:12.639 "enable_ktls": false 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "sock_impl_set_options", 00:24:12.639 "params": { 00:24:12.639 "impl_name": "posix", 00:24:12.639 "recv_buf_size": 2097152, 00:24:12.639 "send_buf_size": 2097152, 00:24:12.639 "enable_recv_pipe": true, 00:24:12.639 "enable_quickack": false, 00:24:12.639 "enable_placement_id": 0, 00:24:12.639 "enable_zerocopy_send_server": true, 00:24:12.639 "enable_zerocopy_send_client": false, 00:24:12.639 "zerocopy_threshold": 0, 00:24:12.639 "tls_version": 0, 00:24:12.639 "enable_ktls": false 00:24:12.639 } 00:24:12.639 } 00:24:12.639 ] 00:24:12.639 }, 00:24:12.639 { 
00:24:12.639 "subsystem": "vmd", 00:24:12.639 "config": [] 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "subsystem": "accel", 00:24:12.639 "config": [ 00:24:12.639 { 00:24:12.639 "method": "accel_set_options", 00:24:12.639 "params": { 00:24:12.639 "small_cache_size": 128, 00:24:12.639 "large_cache_size": 16, 00:24:12.639 "task_count": 2048, 00:24:12.639 "sequence_count": 2048, 00:24:12.639 "buf_count": 2048 00:24:12.639 } 00:24:12.639 } 00:24:12.639 ] 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "subsystem": "bdev", 00:24:12.639 "config": [ 00:24:12.639 { 00:24:12.639 "method": "bdev_set_options", 00:24:12.639 "params": { 00:24:12.639 "bdev_io_pool_size": 65535, 00:24:12.639 "bdev_io_cache_size": 256, 00:24:12.639 "bdev_auto_examine": true, 00:24:12.639 "iobuf_small_cache_size": 128, 00:24:12.639 "iobuf_large_cache_size": 16 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "bdev_raid_set_options", 00:24:12.639 "params": { 00:24:12.639 "process_window_size_kb": 1024, 00:24:12.639 "process_max_bandwidth_mb_sec": 0 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "bdev_iscsi_set_options", 00:24:12.639 "params": { 00:24:12.639 "timeout_sec": 30 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "bdev_nvme_set_options", 00:24:12.639 "params": { 00:24:12.639 "action_on_timeout": "none", 00:24:12.639 "timeout_us": 0, 00:24:12.639 "timeout_admin_us": 0, 00:24:12.639 "keep_alive_timeout_ms": 10000, 00:24:12.639 "arbitration_burst": 0, 00:24:12.639 "low_priority_weight": 0, 00:24:12.639 "medium_priority_weight": 0, 00:24:12.639 "high_priority_weight": 0, 00:24:12.639 "nvme_adminq_poll_period_us": 10000, 00:24:12.639 "nvme_ioq_poll_period_us": 0, 00:24:12.639 "io_queue_requests": 0, 00:24:12.639 "delay_cmd_submit": true, 00:24:12.639 "transport_retry_count": 4, 00:24:12.639 "bdev_retry_count": 3, 00:24:12.639 "transport_ack_timeout": 0, 00:24:12.639 "ctrlr_loss_timeout_sec": 0, 00:24:12.639 "reconnect_delay_sec": 0, 
00:24:12.639 "fast_io_fail_timeout_sec": 0, 00:24:12.639 "disable_auto_failback": false, 00:24:12.639 "generate_uuids": false, 00:24:12.639 "transport_tos": 0, 00:24:12.639 "nvme_error_stat": false, 00:24:12.639 "rdma_srq_size": 0, 00:24:12.639 "io_path_stat": false, 00:24:12.639 "allow_accel_sequence": false, 00:24:12.639 "rdma_max_cq_size": 0, 00:24:12.639 "rdma_cm_event_timeout_ms": 0, 00:24:12.639 "dhchap_digests": [ 00:24:12.639 "sha256", 00:24:12.639 "sha384", 00:24:12.639 "sha512" 00:24:12.639 ], 00:24:12.639 "dhchap_dhgroups": [ 00:24:12.639 "null", 00:24:12.639 "ffdhe2048", 00:24:12.639 "ffdhe3072", 00:24:12.639 "ffdhe4096", 00:24:12.639 "ffdhe6144", 00:24:12.639 "ffdhe8192" 00:24:12.639 ] 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "bdev_nvme_set_hotplug", 00:24:12.639 "params": { 00:24:12.639 "period_us": 100000, 00:24:12.639 "enable": false 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "bdev_malloc_create", 00:24:12.639 "params": { 00:24:12.639 "name": "malloc0", 00:24:12.639 "num_blocks": 8192, 00:24:12.639 "block_size": 4096, 00:24:12.639 "physical_block_size": 4096, 00:24:12.639 "uuid": "d3b65847-e838-4b8e-98aa-685afba5d5b1", 00:24:12.639 "optimal_io_boundary": 0, 00:24:12.639 "md_size": 0, 00:24:12.639 "dif_type": 0, 00:24:12.639 "dif_is_head_of_md": false, 00:24:12.639 "dif_pi_format": 0 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "bdev_wait_for_examine" 00:24:12.639 } 00:24:12.639 ] 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "subsystem": "nbd", 00:24:12.639 "config": [] 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "subsystem": "scheduler", 00:24:12.639 "config": [ 00:24:12.639 { 00:24:12.639 "method": "framework_set_scheduler", 00:24:12.639 "params": { 00:24:12.639 "name": "static" 00:24:12.639 } 00:24:12.639 } 00:24:12.639 ] 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "subsystem": "nvmf", 00:24:12.639 "config": [ 00:24:12.639 { 00:24:12.639 "method": "nvmf_set_config", 
00:24:12.639 "params": { 00:24:12.639 "discovery_filter": "match_any", 00:24:12.639 "admin_cmd_passthru": { 00:24:12.639 "identify_ctrlr": false 00:24:12.639 }, 00:24:12.639 "dhchap_digests": [ 00:24:12.639 "sha256", 00:24:12.639 "sha384", 00:24:12.639 "sha512" 00:24:12.639 ], 00:24:12.639 "dhchap_dhgroups": [ 00:24:12.639 "null", 00:24:12.639 "ffdhe2048", 00:24:12.639 "ffdhe3072", 00:24:12.639 "ffdhe4096", 00:24:12.639 "ffdhe6144", 00:24:12.639 "ffdhe8192" 00:24:12.639 ] 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "nvmf_set_max_subsystems", 00:24:12.639 "params": { 00:24:12.639 "max_subsystems": 1024 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "nvmf_set_crdt", 00:24:12.639 "params": { 00:24:12.639 "crdt1": 0, 00:24:12.639 "crdt2": 0, 00:24:12.639 "crdt3": 0 00:24:12.639 } 00:24:12.639 }, 00:24:12.639 { 00:24:12.639 "method": "nvmf_create_transport", 00:24:12.639 "params": { 00:24:12.639 "trtype": "TCP", 00:24:12.639 "max_queue_depth": 128, 00:24:12.639 "max_io_qpairs_per_ctrlr": 127, 00:24:12.639 "in_capsule_data_size": 4096, 00:24:12.639 "max_io_size": 131072, 00:24:12.639 "io_unit_size": 131072, 00:24:12.639 "max_aq_depth": 128, 00:24:12.639 "num_shared_buffers": 511, 00:24:12.639 "buf_cache_size": 4294967295, 00:24:12.639 "dif_insert_or_strip": false, 00:24:12.640 "zcopy": false, 00:24:12.640 "c2h_success": false, 00:24:12.640 "sock_priority": 0, 00:24:12.640 "abort_timeout_sec": 1, 00:24:12.640 "ack_timeout": 0, 00:24:12.640 "data_wr_pool_size": 0 00:24:12.640 } 00:24:12.640 }, 00:24:12.640 { 00:24:12.640 "method": "nvmf_create_subsystem", 00:24:12.640 "params": { 00:24:12.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.640 "allow_any_host": false, 00:24:12.640 "serial_number": "00000000000000000000", 00:24:12.640 "model_number": "SPDK bdev Controller", 00:24:12.640 "max_namespaces": 32, 00:24:12.640 "min_cntlid": 1, 00:24:12.640 "max_cntlid": 65519, 00:24:12.640 "ana_reporting": false 00:24:12.640 } 
00:24:12.640 }, 00:24:12.640 { 00:24:12.640 "method": "nvmf_subsystem_add_host", 00:24:12.640 "params": { 00:24:12.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.640 "host": "nqn.2016-06.io.spdk:host1", 00:24:12.640 "psk": "key0" 00:24:12.640 } 00:24:12.640 }, 00:24:12.640 { 00:24:12.640 "method": "nvmf_subsystem_add_ns", 00:24:12.640 "params": { 00:24:12.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.640 "namespace": { 00:24:12.640 "nsid": 1, 00:24:12.640 "bdev_name": "malloc0", 00:24:12.640 "nguid": "D3B65847E8384B8E98AA685AFBA5D5B1", 00:24:12.640 "uuid": "d3b65847-e838-4b8e-98aa-685afba5d5b1", 00:24:12.640 "no_auto_visible": false 00:24:12.640 } 00:24:12.640 } 00:24:12.640 }, 00:24:12.640 { 00:24:12.640 "method": "nvmf_subsystem_add_listener", 00:24:12.640 "params": { 00:24:12.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.640 "listen_address": { 00:24:12.640 "trtype": "TCP", 00:24:12.640 "adrfam": "IPv4", 00:24:12.640 "traddr": "10.0.0.2", 00:24:12.640 "trsvcid": "4420" 00:24:12.640 }, 00:24:12.640 "secure_channel": false, 00:24:12.640 "sock_impl": "ssl" 00:24:12.640 } 00:24:12.640 } 00:24:12.640 ] 00:24:12.640 } 00:24:12.640 ] 00:24:12.640 }' 00:24:12.640 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:12.898 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:12.898 "subsystems": [ 00:24:12.898 { 00:24:12.898 "subsystem": "keyring", 00:24:12.898 "config": [ 00:24:12.898 { 00:24:12.898 "method": "keyring_file_add_key", 00:24:12.898 "params": { 00:24:12.898 "name": "key0", 00:24:12.898 "path": "/tmp/tmp.BLZfvXNnZ9" 00:24:12.898 } 00:24:12.898 } 00:24:12.898 ] 00:24:12.898 }, 00:24:12.898 { 00:24:12.898 "subsystem": "iobuf", 00:24:12.899 "config": [ 00:24:12.899 { 00:24:12.899 "method": "iobuf_set_options", 00:24:12.899 "params": { 00:24:12.899 "small_pool_count": 8192, 00:24:12.899 
"large_pool_count": 1024, 00:24:12.899 "small_bufsize": 8192, 00:24:12.899 "large_bufsize": 135168 00:24:12.899 } 00:24:12.899 } 00:24:12.899 ] 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "subsystem": "sock", 00:24:12.899 "config": [ 00:24:12.899 { 00:24:12.899 "method": "sock_set_default_impl", 00:24:12.899 "params": { 00:24:12.899 "impl_name": "posix" 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "sock_impl_set_options", 00:24:12.899 "params": { 00:24:12.899 "impl_name": "ssl", 00:24:12.899 "recv_buf_size": 4096, 00:24:12.899 "send_buf_size": 4096, 00:24:12.899 "enable_recv_pipe": true, 00:24:12.899 "enable_quickack": false, 00:24:12.899 "enable_placement_id": 0, 00:24:12.899 "enable_zerocopy_send_server": true, 00:24:12.899 "enable_zerocopy_send_client": false, 00:24:12.899 "zerocopy_threshold": 0, 00:24:12.899 "tls_version": 0, 00:24:12.899 "enable_ktls": false 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "sock_impl_set_options", 00:24:12.899 "params": { 00:24:12.899 "impl_name": "posix", 00:24:12.899 "recv_buf_size": 2097152, 00:24:12.899 "send_buf_size": 2097152, 00:24:12.899 "enable_recv_pipe": true, 00:24:12.899 "enable_quickack": false, 00:24:12.899 "enable_placement_id": 0, 00:24:12.899 "enable_zerocopy_send_server": true, 00:24:12.899 "enable_zerocopy_send_client": false, 00:24:12.899 "zerocopy_threshold": 0, 00:24:12.899 "tls_version": 0, 00:24:12.899 "enable_ktls": false 00:24:12.899 } 00:24:12.899 } 00:24:12.899 ] 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "subsystem": "vmd", 00:24:12.899 "config": [] 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "subsystem": "accel", 00:24:12.899 "config": [ 00:24:12.899 { 00:24:12.899 "method": "accel_set_options", 00:24:12.899 "params": { 00:24:12.899 "small_cache_size": 128, 00:24:12.899 "large_cache_size": 16, 00:24:12.899 "task_count": 2048, 00:24:12.899 "sequence_count": 2048, 00:24:12.899 "buf_count": 2048 00:24:12.899 } 00:24:12.899 } 00:24:12.899 ] 00:24:12.899 
}, 00:24:12.899 { 00:24:12.899 "subsystem": "bdev", 00:24:12.899 "config": [ 00:24:12.899 { 00:24:12.899 "method": "bdev_set_options", 00:24:12.899 "params": { 00:24:12.899 "bdev_io_pool_size": 65535, 00:24:12.899 "bdev_io_cache_size": 256, 00:24:12.899 "bdev_auto_examine": true, 00:24:12.899 "iobuf_small_cache_size": 128, 00:24:12.899 "iobuf_large_cache_size": 16 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "bdev_raid_set_options", 00:24:12.899 "params": { 00:24:12.899 "process_window_size_kb": 1024, 00:24:12.899 "process_max_bandwidth_mb_sec": 0 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "bdev_iscsi_set_options", 00:24:12.899 "params": { 00:24:12.899 "timeout_sec": 30 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "bdev_nvme_set_options", 00:24:12.899 "params": { 00:24:12.899 "action_on_timeout": "none", 00:24:12.899 "timeout_us": 0, 00:24:12.899 "timeout_admin_us": 0, 00:24:12.899 "keep_alive_timeout_ms": 10000, 00:24:12.899 "arbitration_burst": 0, 00:24:12.899 "low_priority_weight": 0, 00:24:12.899 "medium_priority_weight": 0, 00:24:12.899 "high_priority_weight": 0, 00:24:12.899 "nvme_adminq_poll_period_us": 10000, 00:24:12.899 "nvme_ioq_poll_period_us": 0, 00:24:12.899 "io_queue_requests": 512, 00:24:12.899 "delay_cmd_submit": true, 00:24:12.899 "transport_retry_count": 4, 00:24:12.899 "bdev_retry_count": 3, 00:24:12.899 "transport_ack_timeout": 0, 00:24:12.899 "ctrlr_loss_timeout_sec": 0, 00:24:12.899 "reconnect_delay_sec": 0, 00:24:12.899 "fast_io_fail_timeout_sec": 0, 00:24:12.899 "disable_auto_failback": false, 00:24:12.899 "generate_uuids": false, 00:24:12.899 "transport_tos": 0, 00:24:12.899 "nvme_error_stat": false, 00:24:12.899 "rdma_srq_size": 0, 00:24:12.899 "io_path_stat": false, 00:24:12.899 "allow_accel_sequence": false, 00:24:12.899 "rdma_max_cq_size": 0, 00:24:12.899 "rdma_cm_event_timeout_ms": 0, 00:24:12.899 "dhchap_digests": [ 00:24:12.899 "sha256", 00:24:12.899 "sha384", 
00:24:12.899 "sha512" 00:24:12.899 ], 00:24:12.899 "dhchap_dhgroups": [ 00:24:12.899 "null", 00:24:12.899 "ffdhe2048", 00:24:12.899 "ffdhe3072", 00:24:12.899 "ffdhe4096", 00:24:12.899 "ffdhe6144", 00:24:12.899 "ffdhe8192" 00:24:12.899 ] 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "bdev_nvme_attach_controller", 00:24:12.899 "params": { 00:24:12.899 "name": "nvme0", 00:24:12.899 "trtype": "TCP", 00:24:12.899 "adrfam": "IPv4", 00:24:12.899 "traddr": "10.0.0.2", 00:24:12.899 "trsvcid": "4420", 00:24:12.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.899 "prchk_reftag": false, 00:24:12.899 "prchk_guard": false, 00:24:12.899 "ctrlr_loss_timeout_sec": 0, 00:24:12.899 "reconnect_delay_sec": 0, 00:24:12.899 "fast_io_fail_timeout_sec": 0, 00:24:12.899 "psk": "key0", 00:24:12.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.899 "hdgst": false, 00:24:12.899 "ddgst": false, 00:24:12.899 "multipath": "multipath" 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "bdev_nvme_set_hotplug", 00:24:12.899 "params": { 00:24:12.899 "period_us": 100000, 00:24:12.899 "enable": false 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "bdev_enable_histogram", 00:24:12.899 "params": { 00:24:12.899 "name": "nvme0n1", 00:24:12.899 "enable": true 00:24:12.899 } 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "method": "bdev_wait_for_examine" 00:24:12.899 } 00:24:12.899 ] 00:24:12.899 }, 00:24:12.899 { 00:24:12.899 "subsystem": "nbd", 00:24:12.899 "config": [] 00:24:12.899 } 00:24:12.899 ] 00:24:12.899 }' 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 279479 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279479 ']' 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 279479 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279479 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279479' 00:24:12.899 killing process with pid 279479 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279479 00:24:12.899 Received shutdown signal, test time was about 1.000000 seconds 00:24:12.899 00:24:12.899 Latency(us) 00:24:12.899 [2024-10-14T11:35:04.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.899 [2024-10-14T11:35:04.752Z] =================================================================================================================== 00:24:12.899 [2024-10-14T11:35:04.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.899 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279479 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 279337 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279337 ']' 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 279337 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 279337 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279337' 00:24:13.160 killing process with pid 279337 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279337 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279337 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:13.160 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:13.160 "subsystems": [ 00:24:13.160 { 00:24:13.160 "subsystem": "keyring", 00:24:13.160 "config": [ 00:24:13.160 { 00:24:13.160 "method": "keyring_file_add_key", 00:24:13.160 "params": { 00:24:13.160 "name": "key0", 00:24:13.160 "path": "/tmp/tmp.BLZfvXNnZ9" 00:24:13.160 } 00:24:13.160 } 00:24:13.160 ] 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "subsystem": "iobuf", 00:24:13.160 "config": [ 00:24:13.160 { 00:24:13.160 "method": "iobuf_set_options", 00:24:13.160 "params": { 00:24:13.160 "small_pool_count": 8192, 00:24:13.160 "large_pool_count": 1024, 00:24:13.160 "small_bufsize": 8192, 00:24:13.160 "large_bufsize": 135168 00:24:13.160 } 00:24:13.160 } 00:24:13.160 ] 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "subsystem": "sock", 00:24:13.160 "config": [ 00:24:13.160 { 00:24:13.160 "method": "sock_set_default_impl", 00:24:13.160 "params": { 00:24:13.160 "impl_name": "posix" 00:24:13.160 } 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "method": "sock_impl_set_options", 00:24:13.160 "params": { 
00:24:13.160 "impl_name": "ssl", 00:24:13.160 "recv_buf_size": 4096, 00:24:13.160 "send_buf_size": 4096, 00:24:13.160 "enable_recv_pipe": true, 00:24:13.160 "enable_quickack": false, 00:24:13.160 "enable_placement_id": 0, 00:24:13.160 "enable_zerocopy_send_server": true, 00:24:13.160 "enable_zerocopy_send_client": false, 00:24:13.160 "zerocopy_threshold": 0, 00:24:13.160 "tls_version": 0, 00:24:13.160 "enable_ktls": false 00:24:13.160 } 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "method": "sock_impl_set_options", 00:24:13.160 "params": { 00:24:13.160 "impl_name": "posix", 00:24:13.160 "recv_buf_size": 2097152, 00:24:13.160 "send_buf_size": 2097152, 00:24:13.160 "enable_recv_pipe": true, 00:24:13.160 "enable_quickack": false, 00:24:13.160 "enable_placement_id": 0, 00:24:13.160 "enable_zerocopy_send_server": true, 00:24:13.160 "enable_zerocopy_send_client": false, 00:24:13.160 "zerocopy_threshold": 0, 00:24:13.160 "tls_version": 0, 00:24:13.160 "enable_ktls": false 00:24:13.160 } 00:24:13.160 } 00:24:13.160 ] 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "subsystem": "vmd", 00:24:13.160 "config": [] 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "subsystem": "accel", 00:24:13.160 "config": [ 00:24:13.160 { 00:24:13.160 "method": "accel_set_options", 00:24:13.160 "params": { 00:24:13.160 "small_cache_size": 128, 00:24:13.160 "large_cache_size": 16, 00:24:13.160 "task_count": 2048, 00:24:13.160 "sequence_count": 2048, 00:24:13.160 "buf_count": 2048 00:24:13.160 } 00:24:13.160 } 00:24:13.160 ] 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "subsystem": "bdev", 00:24:13.160 "config": [ 00:24:13.160 { 00:24:13.160 "method": "bdev_set_options", 00:24:13.160 "params": { 00:24:13.160 "bdev_io_pool_size": 65535, 00:24:13.160 "bdev_io_cache_size": 256, 00:24:13.160 "bdev_auto_examine": true, 00:24:13.160 "iobuf_small_cache_size": 128, 00:24:13.160 "iobuf_large_cache_size": 16 00:24:13.160 } 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "method": "bdev_raid_set_options", 00:24:13.160 
"params": { 00:24:13.160 "process_window_size_kb": 1024, 00:24:13.160 "process_max_bandwidth_mb_sec": 0 00:24:13.160 } 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "method": "bdev_iscsi_set_options", 00:24:13.160 "params": { 00:24:13.160 "timeout_sec": 30 00:24:13.160 } 00:24:13.160 }, 00:24:13.160 { 00:24:13.160 "method": "bdev_nvme_set_options", 00:24:13.160 "params": { 00:24:13.160 "action_on_timeout": "none", 00:24:13.160 "timeout_us": 0, 00:24:13.160 "timeout_admin_us": 0, 00:24:13.160 "keep_alive_timeout_ms": 10000, 00:24:13.160 "arbitration_burst": 0, 00:24:13.160 "low_priority_weight": 0, 00:24:13.160 "medium_priority_weight": 0, 00:24:13.160 "high_priority_weight": 0, 00:24:13.160 "nvme_adminq_poll_period_us": 10000, 00:24:13.160 "nvme_ioq_poll_period_us": 0, 00:24:13.160 "io_queue_requests": 0, 00:24:13.160 "delay_cmd_submit": true, 00:24:13.160 "transport_retry_count": 4, 00:24:13.160 "bdev_retry_count": 3, 00:24:13.160 "transport_ack_timeout": 0, 00:24:13.160 "ctrlr_loss_timeout_sec": 0, 00:24:13.161 "reconnect_delay_sec": 0, 00:24:13.161 "fast_io_fail_timeout_sec": 0, 00:24:13.161 "disable_auto_failback": false, 00:24:13.161 "generate_uuids": false, 00:24:13.161 "transport_tos": 0, 00:24:13.161 "nvme_error_stat": false, 00:24:13.161 "rdma_srq_size": 0, 00:24:13.161 "io_path_stat": false, 00:24:13.161 "allow_accel_sequence": false, 00:24:13.161 "rdma_max_cq_size": 0, 00:24:13.161 "rdma_cm_event_timeout_ms": 0, 00:24:13.161 "dhchap_digests": [ 00:24:13.161 "sha256", 00:24:13.161 "sha384", 00:24:13.161 "sha512" 00:24:13.161 ], 00:24:13.161 "dhchap_dhgroups": [ 00:24:13.161 "null", 00:24:13.161 "ffdhe2048", 00:24:13.161 "ffdhe3072", 00:24:13.161 "ffdhe4096", 00:24:13.161 "ffdhe6144", 00:24:13.161 "ffdhe8192" 00:24:13.161 ] 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "bdev_nvme_set_hotplug", 00:24:13.161 "params": { 00:24:13.161 "period_us": 100000, 00:24:13.161 "enable": false 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 
00:24:13.161 "method": "bdev_malloc_create", 00:24:13.161 "params": { 00:24:13.161 "name": "malloc0", 00:24:13.161 "num_blocks": 8192, 00:24:13.161 "block_size": 4096, 00:24:13.161 "physical_block_size": 4096, 00:24:13.161 "uuid": "d3b65847-e838-4b8e-98aa-685afba5d5b1", 00:24:13.161 "optimal_io_boundary": 0, 00:24:13.161 "md_size": 0, 00:24:13.161 "dif_type": 0, 00:24:13.161 "dif_is_head_of_md": false, 00:24:13.161 "dif_pi_format": 0 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "bdev_wait_for_examine" 00:24:13.161 } 00:24:13.161 ] 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "subsystem": "nbd", 00:24:13.161 "config": [] 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "subsystem": "scheduler", 00:24:13.161 "config": [ 00:24:13.161 { 00:24:13.161 "method": "framework_set_scheduler", 00:24:13.161 "params": { 00:24:13.161 "name": "static" 00:24:13.161 } 00:24:13.161 } 00:24:13.161 ] 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "subsystem": "nvmf", 00:24:13.161 "config": [ 00:24:13.161 { 00:24:13.161 "method": "nvmf_set_config", 00:24:13.161 "params": { 00:24:13.161 "discovery_filter": "match_any", 00:24:13.161 "admin_cmd_passthru": { 00:24:13.161 "identify_ctrlr": false 00:24:13.161 }, 00:24:13.161 "dhchap_digests": [ 00:24:13.161 "sha256", 00:24:13.161 "sha384", 00:24:13.161 "sha512" 00:24:13.161 ], 00:24:13.161 "dhchap_dhgroups": [ 00:24:13.161 "null", 00:24:13.161 "ffdhe2048", 00:24:13.161 "ffdhe3072", 00:24:13.161 "ffdhe4096", 00:24:13.161 "ffdhe6144", 00:24:13.161 "ffdhe8192" 00:24:13.161 ] 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "nvmf_set_max_subsystems", 00:24:13.161 "params": { 00:24:13.161 "max_subsystems": 1024 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "nvmf_set_crdt", 00:24:13.161 "params": { 00:24:13.161 "crdt1": 0, 00:24:13.161 "crdt2": 0, 00:24:13.161 "crdt3": 0 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "nvmf_create_transport", 00:24:13.161 "params": { 
00:24:13.161 "trtype": "TCP", 00:24:13.161 "max_queue_depth": 128, 00:24:13.161 "max_io_qpairs_per_ctrlr": 127, 00:24:13.161 "in_capsule_data_size": 4096, 00:24:13.161 "max_io_size": 131072, 00:24:13.161 "io_unit_size": 131072, 00:24:13.161 "max_aq_depth": 128, 00:24:13.161 "num_shared_buffers": 511, 00:24:13.161 "buf_cache_size": 4294967295, 00:24:13.161 "dif_insert_or_strip": false, 00:24:13.161 "zcopy": false, 00:24:13.161 "c2h_success": false, 00:24:13.161 "sock_priority": 0, 00:24:13.161 "abort_timeout_sec": 1, 00:24:13.161 "ack_timeout": 0, 00:24:13.161 "data_wr_pool_size": 0 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "nvmf_create_subsystem", 00:24:13.161 "params": { 00:24:13.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.161 "allow_any_host": false, 00:24:13.161 "serial_number": "00000000000000000000", 00:24:13.161 "model_number": "SPDK bdev Controller", 00:24:13.161 "max_namespaces": 32, 00:24:13.161 "min_cntlid": 1, 00:24:13.161 "max_cntlid": 65519, 00:24:13.161 "ana_reporting": false 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "nvmf_subsystem_add_host", 00:24:13.161 "params": { 00:24:13.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.161 "host": "nqn.2016-06.io.spdk:host1", 00:24:13.161 "psk": "key0" 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "nvmf_subsystem_add_ns", 00:24:13.161 "params": { 00:24:13.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.161 "namespace": { 00:24:13.161 "nsid": 1, 00:24:13.161 "bdev_name": "malloc0", 00:24:13.161 "nguid": "D3B65847E8384B8E98AA685AFBA5D5B1", 00:24:13.161 "uuid": "d3b65847-e838-4b8e-98aa-685afba5d5b1", 00:24:13.161 "no_auto_visible": false 00:24:13.161 } 00:24:13.161 } 00:24:13.161 }, 00:24:13.161 { 00:24:13.161 "method": "nvmf_subsystem_add_listener", 00:24:13.161 "params": { 00:24:13.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.161 "listen_address": { 00:24:13.161 "trtype": "TCP", 00:24:13.161 "adrfam": "IPv4", 00:24:13.161 
"traddr": "10.0.0.2", 00:24:13.161 "trsvcid": "4420" 00:24:13.161 }, 00:24:13.161 "secure_channel": false, 00:24:13.161 "sock_impl": "ssl" 00:24:13.161 } 00:24:13.161 } 00:24:13.161 ] 00:24:13.161 } 00:24:13.161 ] 00:24:13.161 }' 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=279775 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 279775 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279775 ']' 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.161 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.420 [2024-10-14 13:35:05.049460] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:24:13.420 [2024-10-14 13:35:05.049552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.420 [2024-10-14 13:35:05.112856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.420 [2024-10-14 13:35:05.152612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.420 [2024-10-14 13:35:05.152675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.420 [2024-10-14 13:35:05.152704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.420 [2024-10-14 13:35:05.152715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.420 [2024-10-14 13:35:05.152724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:13.420 [2024-10-14 13:35:05.153319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.678 [2024-10-14 13:35:05.390685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.678 [2024-10-14 13:35:05.422712] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.678 [2024-10-14 13:35:05.422954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=279926 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 279926 /var/tmp/bdevperf.sock 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 279926 ']' 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.244 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:14.244 "subsystems": [ 00:24:14.244 { 00:24:14.244 "subsystem": "keyring", 00:24:14.244 "config": [ 00:24:14.244 { 00:24:14.244 "method": "keyring_file_add_key", 00:24:14.244 "params": { 00:24:14.244 "name": "key0", 00:24:14.244 "path": "/tmp/tmp.BLZfvXNnZ9" 00:24:14.244 } 00:24:14.244 } 00:24:14.244 ] 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "subsystem": "iobuf", 00:24:14.244 "config": [ 00:24:14.244 { 00:24:14.244 "method": "iobuf_set_options", 00:24:14.244 "params": { 00:24:14.244 "small_pool_count": 8192, 00:24:14.244 "large_pool_count": 1024, 00:24:14.244 "small_bufsize": 8192, 00:24:14.244 "large_bufsize": 135168 00:24:14.244 } 00:24:14.244 } 00:24:14.244 ] 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "subsystem": "sock", 00:24:14.244 "config": [ 00:24:14.244 { 00:24:14.244 "method": "sock_set_default_impl", 00:24:14.244 "params": { 00:24:14.244 "impl_name": "posix" 00:24:14.244 } 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "method": "sock_impl_set_options", 00:24:14.244 "params": { 00:24:14.244 "impl_name": "ssl", 00:24:14.244 "recv_buf_size": 4096, 00:24:14.244 "send_buf_size": 4096, 00:24:14.244 "enable_recv_pipe": true, 00:24:14.244 "enable_quickack": false, 00:24:14.244 "enable_placement_id": 0, 00:24:14.244 "enable_zerocopy_send_server": true, 00:24:14.244 "enable_zerocopy_send_client": false, 00:24:14.244 "zerocopy_threshold": 0, 00:24:14.244 "tls_version": 0, 00:24:14.244 "enable_ktls": false 00:24:14.244 } 00:24:14.244 }, 00:24:14.244 { 
00:24:14.244 "method": "sock_impl_set_options", 00:24:14.244 "params": { 00:24:14.244 "impl_name": "posix", 00:24:14.244 "recv_buf_size": 2097152, 00:24:14.244 "send_buf_size": 2097152, 00:24:14.244 "enable_recv_pipe": true, 00:24:14.244 "enable_quickack": false, 00:24:14.244 "enable_placement_id": 0, 00:24:14.244 "enable_zerocopy_send_server": true, 00:24:14.244 "enable_zerocopy_send_client": false, 00:24:14.244 "zerocopy_threshold": 0, 00:24:14.244 "tls_version": 0, 00:24:14.244 "enable_ktls": false 00:24:14.244 } 00:24:14.244 } 00:24:14.244 ] 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "subsystem": "vmd", 00:24:14.244 "config": [] 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "subsystem": "accel", 00:24:14.244 "config": [ 00:24:14.244 { 00:24:14.244 "method": "accel_set_options", 00:24:14.244 "params": { 00:24:14.244 "small_cache_size": 128, 00:24:14.244 "large_cache_size": 16, 00:24:14.244 "task_count": 2048, 00:24:14.244 "sequence_count": 2048, 00:24:14.244 "buf_count": 2048 00:24:14.244 } 00:24:14.244 } 00:24:14.244 ] 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "subsystem": "bdev", 00:24:14.244 "config": [ 00:24:14.244 { 00:24:14.244 "method": "bdev_set_options", 00:24:14.244 "params": { 00:24:14.244 "bdev_io_pool_size": 65535, 00:24:14.244 "bdev_io_cache_size": 256, 00:24:14.244 "bdev_auto_examine": true, 00:24:14.244 "iobuf_small_cache_size": 128, 00:24:14.244 "iobuf_large_cache_size": 16 00:24:14.244 } 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "method": "bdev_raid_set_options", 00:24:14.244 "params": { 00:24:14.244 "process_window_size_kb": 1024, 00:24:14.244 "process_max_bandwidth_mb_sec": 0 00:24:14.244 } 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "method": "bdev_iscsi_set_options", 00:24:14.244 "params": { 00:24:14.244 "timeout_sec": 30 00:24:14.244 } 00:24:14.244 }, 00:24:14.244 { 00:24:14.244 "method": "bdev_nvme_set_options", 00:24:14.244 "params": { 00:24:14.244 "action_on_timeout": "none", 00:24:14.244 "timeout_us": 0, 00:24:14.244 
"timeout_admin_us": 0, 00:24:14.244 "keep_alive_timeout_ms": 10000, 00:24:14.244 "arbitration_burst": 0, 00:24:14.244 "low_priority_weight": 0, 00:24:14.244 "medium_priority_weight": 0, 00:24:14.244 "high_priority_weight": 0, 00:24:14.244 "nvme_adminq_poll_period_us": 10000, 00:24:14.244 "nvme_ioq_poll_period_us": 0, 00:24:14.244 "io_queue_requests": 512, 00:24:14.244 "delay_cmd_submit": true, 00:24:14.244 "transport_retry_count": 4, 00:24:14.244 "bdev_retry_count": 3, 00:24:14.244 "transport_ack_timeout": 0, 00:24:14.244 "ctrlr_loss_timeout_sec": 0, 00:24:14.244 "reconnect_delay_sec": 0, 00:24:14.244 "fast_io_fail_timeout_sec": 0, 00:24:14.244 "disable_auto_failback": false, 00:24:14.244 "generate_uuids": false, 00:24:14.244 "transport_tos": 0, 00:24:14.244 "nvme_error_stat": false, 00:24:14.244 "rdma_srq_size": 0, 00:24:14.245 "io_path_stat": false, 00:24:14.245 "allow_accel_sequence": false, 00:24:14.245 "rdma_max_cq_size": 0, 00:24:14.245 "rdma_cm_event_timeout_ms": 0, 00:24:14.245 "dhchap_digests": [ 00:24:14.245 "sha256", 00:24:14.245 "sha384", 00:24:14.245 "sha512" 00:24:14.245 ], 00:24:14.245 "dhchap_dhgroups": [ 00:24:14.245 "null", 00:24:14.245 "ffdhe2048", 00:24:14.245 "ffdhe3072", 00:24:14.245 "ffdhe4096", 00:24:14.245 "ffdhe6144", 00:24:14.245 "ffdhe8192" 00:24:14.245 ] 00:24:14.245 } 00:24:14.245 }, 00:24:14.245 { 00:24:14.245 "method": "bdev_nvme_attach_controller", 00:24:14.245 "params": { 00:24:14.245 "name": "nvme0", 00:24:14.245 "trtype": "TCP", 00:24:14.245 "adrfam": "IPv4", 00:24:14.245 "traddr": "10.0.0.2", 00:24:14.245 "trsvcid": "4420", 00:24:14.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.245 "prchk_reftag": false, 00:24:14.245 "prchk_guard": false, 00:24:14.245 "ctrlr_loss_timeout_sec": 0, 00:24:14.245 "reconnect_delay_sec": 0, 00:24:14.245 "fast_io_fail_timeout_sec": 0, 00:24:14.245 "psk": "key0", 00:24:14.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.245 "hdgst": false, 00:24:14.245 "ddgst": false, 00:24:14.245 
"multipath": "multipath" 00:24:14.245 } 00:24:14.245 }, 00:24:14.245 { 00:24:14.245 "method": "bdev_nvme_set_hotplug", 00:24:14.245 "params": { 00:24:14.245 "period_us": 100000, 00:24:14.245 "enable": false 00:24:14.245 } 00:24:14.245 }, 00:24:14.245 { 00:24:14.245 "method": "bdev_enable_histogram", 00:24:14.245 "params": { 00:24:14.245 "name": "nvme0n1", 00:24:14.245 "enable": true 00:24:14.245 } 00:24:14.245 }, 00:24:14.245 { 00:24:14.245 "method": "bdev_wait_for_examine" 00:24:14.245 } 00:24:14.245 ] 00:24:14.245 }, 00:24:14.245 { 00:24:14.245 "subsystem": "nbd", 00:24:14.245 "config": [] 00:24:14.245 } 00:24:14.245 ] 00:24:14.245 }' 00:24:14.245 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.502 [2024-10-14 13:35:06.124499] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:24:14.502 [2024-10-14 13:35:06.124586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279926 ] 00:24:14.502 [2024-10-14 13:35:06.182208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.502 [2024-10-14 13:35:06.227166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.760 [2024-10-14 13:35:06.400023] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.760 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.760 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:14.760 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:14.760 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@279 -- # jq -r '.[].name' 00:24:15.018 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.018 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.276 Running I/O for 1 seconds... 00:24:16.210 3220.00 IOPS, 12.58 MiB/s 00:24:16.210 Latency(us) 00:24:16.210 [2024-10-14T11:35:08.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.210 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:16.210 Verification LBA range: start 0x0 length 0x2000 00:24:16.210 nvme0n1 : 1.02 3284.77 12.83 0.00 0.00 38614.14 6092.42 35729.26 00:24:16.210 [2024-10-14T11:35:08.063Z] =================================================================================================================== 00:24:16.210 [2024-10-14T11:35:08.063Z] Total : 3284.77 12.83 0.00 0.00 38614.14 6092.42 35729.26 00:24:16.210 { 00:24:16.210 "results": [ 00:24:16.210 { 00:24:16.210 "job": "nvme0n1", 00:24:16.210 "core_mask": "0x2", 00:24:16.210 "workload": "verify", 00:24:16.210 "status": "finished", 00:24:16.210 "verify_range": { 00:24:16.210 "start": 0, 00:24:16.210 "length": 8192 00:24:16.210 }, 00:24:16.210 "queue_depth": 128, 00:24:16.210 "io_size": 4096, 00:24:16.210 "runtime": 1.019248, 00:24:16.210 "iops": 3284.7746573944714, 00:24:16.210 "mibps": 12.831151005447154, 00:24:16.210 "io_failed": 0, 00:24:16.210 "io_timeout": 0, 00:24:16.210 "avg_latency_us": 38614.14237444135, 00:24:16.210 "min_latency_us": 6092.420740740741, 00:24:16.210 "max_latency_us": 35729.2562962963 00:24:16.210 } 00:24:16.210 ], 00:24:16.210 "core_count": 1 00:24:16.210 } 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 
-- # cleanup 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:16.210 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:16.210 nvmf_trace.0 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 279926 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279926 ']' 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 279926 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279926 00:24:16.210 13:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279926' 00:24:16.210 killing process with pid 279926 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279926 00:24:16.210 Received shutdown signal, test time was about 1.000000 seconds 00:24:16.210 00:24:16.210 Latency(us) 00:24:16.210 [2024-10-14T11:35:08.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.210 [2024-10-14T11:35:08.063Z] =================================================================================================================== 00:24:16.210 [2024-10-14T11:35:08.063Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.210 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279926 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.468 rmmod nvme_tcp 00:24:16.468 rmmod nvme_fabrics 00:24:16.468 rmmod nvme_keyring 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.468 13:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 279775 ']' 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 279775 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 279775 ']' 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 279775 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.468 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 279775 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 279775' 00:24:16.727 killing process with pid 279775 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 279775 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 279775 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:16.727 13:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.727 13:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pSPuAl0Z8f /tmp/tmp.8w7AwmdtKz /tmp/tmp.BLZfvXNnZ9 00:24:19.267 00:24:19.267 real 1m21.713s 00:24:19.267 user 2m14.331s 00:24:19.267 sys 0m25.717s 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.267 ************************************ 00:24:19.267 END TEST nvmf_tls 00:24:19.267 ************************************ 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:19.267 ************************************ 00:24:19.267 START TEST nvmf_fips 00:24:19.267 ************************************ 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:19.267 * Looking for test storage... 00:24:19.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@341 -- # ver2_l=1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:19.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.267 --rc genhtml_branch_coverage=1 00:24:19.267 --rc genhtml_function_coverage=1 00:24:19.267 --rc genhtml_legend=1 00:24:19.267 --rc geninfo_all_blocks=1 00:24:19.267 --rc geninfo_unexecuted_blocks=1 00:24:19.267 00:24:19.267 ' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:19.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.267 --rc genhtml_branch_coverage=1 00:24:19.267 --rc genhtml_function_coverage=1 00:24:19.267 --rc genhtml_legend=1 00:24:19.267 --rc geninfo_all_blocks=1 00:24:19.267 --rc geninfo_unexecuted_blocks=1 00:24:19.267 00:24:19.267 ' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:19.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.267 --rc genhtml_branch_coverage=1 00:24:19.267 --rc genhtml_function_coverage=1 00:24:19.267 --rc genhtml_legend=1 00:24:19.267 --rc geninfo_all_blocks=1 00:24:19.267 --rc geninfo_unexecuted_blocks=1 00:24:19.267 00:24:19.267 ' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:19.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.267 --rc genhtml_branch_coverage=1 00:24:19.267 --rc genhtml_function_coverage=1 00:24:19.267 --rc genhtml_legend=1 00:24:19.267 --rc geninfo_all_blocks=1 00:24:19.267 --rc geninfo_unexecuted_blocks=1 00:24:19.267 00:24:19.267 ' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.267 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.267 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.268 13:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:19.268 13:35:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:19.268 Error setting digest 00:24:19.268 4022C005747F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:19.268 4022C005747F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:19.268 13:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.268 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.269 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.269 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:19.269 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:19.269 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.269 13:35:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:21.967 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
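Earlier in this trace, `cmp_versions` from scripts/common.sh checks `lt 1.15 2` (is the installed lcov older than 2?) and later `ge 3.1.1 3.0.0` (does the OpenSSL version meet the FIPS floor?) by splitting versions on `.-:` and comparing component-wise. The sketch below is a simplified stand-in that leans on GNU `sort -V` instead of the component loop; it is not the actual SPDK implementation, just an illustration of the same comparisons the log performs.

```shell
#!/usr/bin/env bash
# Simplified stand-ins for the cmp_versions checks traced above.
# Assumption: GNU sort with -V (version sort) is available; the real
# scripts/common.sh splits on ".-:" and compares components in a loop.
lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
ge() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; }

lt 1.15 2      && echo "lcov 1.15 is older than 2"
ge 3.1.1 3.0.0 && echo "OpenSSL 3.1.1 satisfies the 3.0.0 FIPS floor"
```

Both comparisons succeed here, matching the `return 0` results in the trace.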
00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:21.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:21.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:21.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
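The "Found net devices under 0000:0a:00.0: cvl_0_0" lines above come from nvmf/common.sh globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix with `${pci_net_devs[@]##*/}`. A minimal sketch of that mapping, using a mock sysfs tree (the directory layout and device names below are fabricated for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-address -> net-interface mapping seen in the trace,
# against a mock sysfs tree so it runs anywhere (no real hardware assumed).
mock=$(mktemp -d)
pci="0000:0a:00.0"
mkdir -p "$mock/$pci/net/cvl_0_0"

pci_net_devs=("$mock/$pci/net/"*)          # full paths to interface dirs
pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$mock"
```

Against the real sysfs this same glob is what yields `cvl_0_0` and `cvl_0_1` for the two E810 ports in the log.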
00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:21.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.968 13:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:24:21.968 00:24:21.968 --- 10.0.0.2 ping statistics --- 00:24:21.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.968 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:24:21.968 00:24:21.968 --- 10.0.0.1 ping statistics --- 00:24:21.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.968 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:21.968 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:21.969 13:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=282283 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 282283 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 282283 ']' 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.969 [2024-10-14 13:35:13.455630] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
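The environment prepared earlier in this log isolates the target by moving one port of the NIC pair into a dedicated network namespace and punching a firewall hole for port 4420. The sequence can be sketched as a standalone script (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the log above; assumes root and that both interfaces exist):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology set up by nvmf/common.sh in the log above.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace
INI_IF=cvl_0_1   # initiator side, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in, tagged with an SPDK_NVMF comment so the
# cleanup path can later strip the rule with iptables-save | grep -v.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

# Sanity-check both directions, as the test does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

This is a configuration sketch, not the test's actual helper; nvmf/common.sh wraps the same commands in functions with error handling.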
00:24:21.969 [2024-10-14 13:35:13.455720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.969 [2024-10-14 13:35:13.519593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.969 [2024-10-14 13:35:13.565815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.969 [2024-10-14 13:35:13.565867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.969 [2024-10-14 13:35:13.565895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.969 [2024-10-14 13:35:13.565907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.969 [2024-10-14 13:35:13.565916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
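The waitforlisten call above blocks until the target process is up and accepting on /var/tmp/spdk.sock (max_retries=100 per the xtrace). A much-simplified sketch of such a wait loop (the real helper goes further and issues an RPC over the socket; this version only polls for the socket file to appear):

```shell
# Poll until a UNIX domain socket path exists. Returns 0 on success,
# 1 after max_retries attempts. Simplified stand-in for waitforlisten.
wait_for_listen() {
    local sock=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```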
00:24:21.969 [2024-10-14 13:35:13.566513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.VDo 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.VDo 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.VDo 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.VDo 00:24:21.969 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.258 [2024-10-14 13:35:14.005628] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.258 [2024-10-14 13:35:14.021643] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.258 [2024-10-14 13:35:14.021864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.258 malloc0 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=282324 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 282324 /var/tmp/bdevperf.sock 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 282324 ']' 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.258 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:22.550 [2024-10-14 13:35:14.155293] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
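The PSK written to /tmp/spdk-psk.VDo earlier uses the NVMe-oF TLS interchange format, `NVMeTLSkey-1:<hh>:<base64>:`. As a rough sketch, the base64 payload can be decoded to inspect the key material size (reading the trailing 4 bytes as a CRC over the key material is my understanding of the interchange format, not something this log states):

```shell
# Decode the interchange-format PSK from the log and report field sizes.
# Assumed layout: NVMeTLSkey-1:<hash id>:<base64 of key material + 4-byte CRC>:
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

IFS=: read -r prefix hash_id b64 _ <<< "$key"
raw_len=$(printf '%s' "$b64" | base64 -d | wc -c)
material_len=$((raw_len - 4))   # strip the assumed 4-byte checksum

echo "$prefix hash=$hash_id material_bytes=$material_len"
```

For this key the payload decodes to 36 bytes, i.e. a 32-byte PSK plus the 4-byte trailer.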
00:24:22.550 [2024-10-14 13:35:14.155375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282324 ] 00:24:22.550 [2024-10-14 13:35:14.213987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.550 [2024-10-14 13:35:14.259353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.550 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.550 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:22.550 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.VDo 00:24:22.857 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:23.115 [2024-10-14 13:35:14.880872] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.115 TLSTESTn1 00:24:23.115 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.372 Running I/O for 10 seconds... 
00:24:25.678 3186.00 IOPS, 12.45 MiB/s [2024-10-14T11:35:18.097Z] 3244.00 IOPS, 12.67 MiB/s [2024-10-14T11:35:19.470Z] 3254.33 IOPS, 12.71 MiB/s [2024-10-14T11:35:20.403Z] 3300.50 IOPS, 12.89 MiB/s [2024-10-14T11:35:21.338Z] 3300.80 IOPS, 12.89 MiB/s [2024-10-14T11:35:22.271Z] 3313.00 IOPS, 12.94 MiB/s [2024-10-14T11:35:23.204Z] 3301.43 IOPS, 12.90 MiB/s [2024-10-14T11:35:24.138Z] 3318.25 IOPS, 12.96 MiB/s [2024-10-14T11:35:25.512Z] 3310.78 IOPS, 12.93 MiB/s [2024-10-14T11:35:25.512Z] 3310.60 IOPS, 12.93 MiB/s 00:24:33.659 Latency(us) 00:24:33.659 [2024-10-14T11:35:25.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.659 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:33.659 Verification LBA range: start 0x0 length 0x2000 00:24:33.659 TLSTESTn1 : 10.02 3315.70 12.95 0.00 0.00 38535.25 10145.94 54370.61 00:24:33.659 [2024-10-14T11:35:25.512Z] =================================================================================================================== 00:24:33.659 [2024-10-14T11:35:25.512Z] Total : 3315.70 12.95 0.00 0.00 38535.25 10145.94 54370.61 00:24:33.659 { 00:24:33.659 "results": [ 00:24:33.659 { 00:24:33.659 "job": "TLSTESTn1", 00:24:33.659 "core_mask": "0x4", 00:24:33.659 "workload": "verify", 00:24:33.659 "status": "finished", 00:24:33.659 "verify_range": { 00:24:33.659 "start": 0, 00:24:33.659 "length": 8192 00:24:33.659 }, 00:24:33.659 "queue_depth": 128, 00:24:33.659 "io_size": 4096, 00:24:33.659 "runtime": 10.022934, 00:24:33.659 "iops": 3315.6957832906014, 00:24:33.659 "mibps": 12.951936653478912, 00:24:33.659 "io_failed": 0, 00:24:33.659 "io_timeout": 0, 00:24:33.659 "avg_latency_us": 38535.24556595352, 00:24:33.659 "min_latency_us": 10145.943703703704, 00:24:33.659 "max_latency_us": 54370.607407407406 00:24:33.659 } 00:24:33.659 ], 00:24:33.659 "core_count": 1 00:24:33.659 } 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:33.659 
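The bdevperf summary above is printed both as a table and as JSON, and the fields are self-consistent: IOPS times IO size gives the reported MiB/s. A quick check with the values copied from the results block (awk used only for the floating-point arithmetic):

```shell
# Cross-check the bdevperf JSON above: mibps == iops * io_size / 1 MiB.
iops=3315.6957832906014
io_size=4096
reported_mibps=12.951936653478912

derived=$(awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.6f", i * s / (1024 * 1024) }')
reported=$(awk -v m="$reported_mibps" 'BEGIN { printf "%.6f", m }')

echo "derived=$derived reported=$reported"
```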
13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:33.659 nvmf_trace.0 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 282324 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 282324 ']' 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 282324 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282324 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282324' 00:24:33.659 killing process with pid 282324 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 282324 00:24:33.659 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.659 00:24:33.659 Latency(us) 00:24:33.659 [2024-10-14T11:35:25.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.659 [2024-10-14T11:35:25.512Z] =================================================================================================================== 00:24:33.659 [2024-10-14T11:35:25.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 282324 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.659 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.659 rmmod nvme_tcp 00:24:33.659 rmmod nvme_fabrics 00:24:33.659 rmmod nvme_keyring 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.917 13:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 282283 ']' 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 282283 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 282283 ']' 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 282283 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 282283 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 282283' 00:24:33.917 killing process with pid 282283 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 282283 00:24:33.917 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 282283 00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:24:34.176 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.177 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.177 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.177 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.177 13:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.VDo 00:24:36.084 00:24:36.084 real 0m17.163s 00:24:36.084 user 0m22.925s 00:24:36.084 sys 0m5.199s 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:36.084 ************************************ 00:24:36.084 END TEST nvmf_fips 00:24:36.084 ************************************ 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:36.084 ************************************ 00:24:36.084 START TEST nvmf_control_msg_list 00:24:36.084 ************************************ 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:36.084 * Looking for test storage... 00:24:36.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:36.084 13:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
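The xtrace here steps through scripts/common.sh's cmp_versions helper: it splits the two versions on `.-:`, pads the shorter one, and compares component-wise, concluding that lcov 1.15 sorts before 2. For plain numeric versions, GNU `sort -V` gives the same answer as that loop; a compact sketch (a shortcut, not the helper's actual implementation):

```shell
# True (exit 0) if version $1 sorts strictly before version $2.
# Shortcut via GNU sort -V; cmp_versions above does this component-wise.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "1.15 < 2"
```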
00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.341 --rc genhtml_branch_coverage=1 00:24:36.341 --rc genhtml_function_coverage=1 00:24:36.341 --rc genhtml_legend=1 00:24:36.341 --rc geninfo_all_blocks=1 00:24:36.341 --rc geninfo_unexecuted_blocks=1 00:24:36.341 00:24:36.341 ' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.341 --rc genhtml_branch_coverage=1 00:24:36.341 --rc genhtml_function_coverage=1 00:24:36.341 --rc genhtml_legend=1 00:24:36.341 --rc geninfo_all_blocks=1 00:24:36.341 --rc geninfo_unexecuted_blocks=1 00:24:36.341 00:24:36.341 ' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.341 --rc genhtml_branch_coverage=1 00:24:36.341 --rc genhtml_function_coverage=1 00:24:36.341 --rc genhtml_legend=1 00:24:36.341 --rc geninfo_all_blocks=1 00:24:36.341 --rc geninfo_unexecuted_blocks=1 00:24:36.341 00:24:36.341 ' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:36.341 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.341 --rc genhtml_branch_coverage=1 00:24:36.341 --rc genhtml_function_coverage=1 00:24:36.341 --rc genhtml_legend=1 00:24:36.341 --rc geninfo_all_blocks=1 00:24:36.341 --rc geninfo_unexecuted_blocks=1 00:24:36.341 00:24:36.341 ' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.341 13:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.341 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.342 13:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.342 13:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.342 13:35:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.872 13:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.872 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.873 13:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:38.873 13:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.873 13:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:24:38.873 00:24:38.873 --- 10.0.0.2 ping statistics --- 00:24:38.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.873 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:24:38.873 00:24:38.873 --- 10.0.0.1 ping statistics --- 00:24:38.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.873 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.873 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=285589 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 285589 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 285589 ']' 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.874 [2024-10-14 13:35:30.340255] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:24:38.874 [2024-10-14 13:35:30.340347] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.874 [2024-10-14 13:35:30.408444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.874 [2024-10-14 13:35:30.455229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.874 [2024-10-14 13:35:30.455292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.874 [2024-10-14 13:35:30.455317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.874 [2024-10-14 13:35:30.455329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.874 [2024-10-14 13:35:30.455340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:38.874 [2024-10-14 13:35:30.455988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.874 [2024-10-14 13:35:30.601575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.874 Malloc0 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:38.874 [2024-10-14 13:35:30.641389] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=285723 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=285724 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=285725 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 285723 00:24:38.874 13:35:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.874 [2024-10-14 13:35:30.700322] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:38.874 [2024-10-14 13:35:30.700698] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:38.874 [2024-10-14 13:35:30.700970] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:40.247 Initializing NVMe Controllers 00:24:40.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:40.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:40.247 Initialization complete. Launching workers. 00:24:40.247 ======================================================== 00:24:40.247 Latency(us) 00:24:40.247 Device Information : IOPS MiB/s Average min max 00:24:40.247 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4124.00 16.11 242.11 155.18 510.68 00:24:40.247 ======================================================== 00:24:40.247 Total : 4124.00 16.11 242.11 155.18 510.68 00:24:40.247 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 285724 00:24:40.247 Initializing NVMe Controllers 00:24:40.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:40.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:40.247 Initialization complete. Launching workers. 
00:24:40.247 ======================================================== 00:24:40.247 Latency(us) 00:24:40.247 Device Information : IOPS MiB/s Average min max 00:24:40.247 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 26.00 0.10 39325.15 306.78 41017.74 00:24:40.247 ======================================================== 00:24:40.247 Total : 26.00 0.10 39325.15 306.78 41017.74 00:24:40.247 00:24:40.247 Initializing NVMe Controllers 00:24:40.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:40.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:40.247 Initialization complete. Launching workers. 00:24:40.247 ======================================================== 00:24:40.247 Latency(us) 00:24:40.247 Device Information : IOPS MiB/s Average min max 00:24:40.247 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3811.00 14.89 261.97 185.60 573.70 00:24:40.247 ======================================================== 00:24:40.247 Total : 3811.00 14.89 261.97 185.60 573.70 00:24:40.247 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 285725 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:40.247 13:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.247 rmmod nvme_tcp 00:24:40.247 rmmod nvme_fabrics 00:24:40.247 rmmod nvme_keyring 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 285589 ']' 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 285589 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 285589 ']' 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 285589 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 285589 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 285589' 00:24:40.247 killing process with pid 285589 00:24:40.247 13:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 285589 00:24:40.247 13:35:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 285589 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.506 13:35:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.411 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.411 00:24:42.411 real 0m6.358s 00:24:42.411 user 0m5.476s 00:24:42.411 sys 0m2.729s 00:24:42.411 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.411 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:42.411 ************************************ 00:24:42.411 END TEST nvmf_control_msg_list 00:24:42.411 ************************************ 00:24:42.411 13:35:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:42.411 13:35:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:42.411 13:35:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:42.411 13:35:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:42.671 ************************************ 00:24:42.671 START TEST nvmf_wait_for_buf 00:24:42.671 ************************************ 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:42.671 * Looking for test storage... 
00:24:42.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
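The trace above steps through the element-wise version comparison in scripts/common.sh (checking whether the installed lcov 1.15 is older than 2 before choosing coverage flags). A minimal standalone sketch of that comparison, assuming plain dotted numeric versions (function name `lt` as in the trace; this is a simplification, not the full cmp_versions logic):

```shell
# Return 0 (true) if version $1 is strictly older than version $2.
# Components are split on '.', '-' and ':' as the traced script does.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # Missing components compare as 0, so "2" behaves like "2.0".
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 is older than 2"   # → prints "1.15 is older than 2"
```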
export 'LCOV_OPTS= 00:24:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.671 --rc genhtml_branch_coverage=1 00:24:42.671 --rc genhtml_function_coverage=1 00:24:42.671 --rc genhtml_legend=1 00:24:42.671 --rc geninfo_all_blocks=1 00:24:42.671 --rc geninfo_unexecuted_blocks=1 00:24:42.671 00:24:42.671 ' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.671 --rc genhtml_branch_coverage=1 00:24:42.671 --rc genhtml_function_coverage=1 00:24:42.671 --rc genhtml_legend=1 00:24:42.671 --rc geninfo_all_blocks=1 00:24:42.671 --rc geninfo_unexecuted_blocks=1 00:24:42.671 00:24:42.671 ' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.671 --rc genhtml_branch_coverage=1 00:24:42.671 --rc genhtml_function_coverage=1 00:24:42.671 --rc genhtml_legend=1 00:24:42.671 --rc geninfo_all_blocks=1 00:24:42.671 --rc geninfo_unexecuted_blocks=1 00:24:42.671 00:24:42.671 ' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:42.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.671 --rc genhtml_branch_coverage=1 00:24:42.671 --rc genhtml_function_coverage=1 00:24:42.671 --rc genhtml_legend=1 00:24:42.671 --rc geninfo_all_blocks=1 00:24:42.671 --rc geninfo_unexecuted_blocks=1 00:24:42.671 00:24:42.671 ' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.671 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.672 13:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.207 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.207 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.207 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:45.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:45.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:45.208 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:45.208 13:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:45.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.208 13:35:36 
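The loop traced above resolves each detected NIC's PCI address (here the two ice ports at 0000:0a:00.0/.1) to its kernel net device by globbing sysfs. A simplified sketch of that lookup — the real nvmf/common.sh additionally filters on driver binding and link state, which is omitted here:

```shell
# Print the kernel net device names behind a PCI function (e.g.
# "0000:0a:00.0" -> "cvl_0_0") by expanding the sysfs net/ directory,
# exactly as the traced pci_net_devs glob does.
pci_to_net_devs() {
  local pci=$1
  local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  # Strip the sysfs path prefix, keeping only the interface names.
  pci_net_devs=("${pci_net_devs[@]##*/}")
  printf '%s\n' "${pci_net_devs[@]}"
}
```

Requires a real PCI NIC at the given address, so it is shown as a function rather than invoked.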
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.208 13:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:24:45.208 00:24:45.208 --- 10.0.0.2 ping statistics --- 00:24:45.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.208 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:24:45.208 00:24:45.208 --- 10.0.0.1 ping statistics --- 00:24:45.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.208 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:45.208 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=287802 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
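The nvmf_tcp_init sequence traced above moves one physical port into a target-side network namespace, leaves the other on the host as the initiator, opens the NVMe/TCP port in iptables, and verifies both directions with ping. A condensed sketch of that sequence (interface names, namespace name, and 10.0.0.x addresses taken from the log; needs root, so it is defined but not run here):

```shell
setup_tcp_test_net() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  ip -4 addr flush "$target_if"
  ip -4 addr flush "$initiator_if"
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator IP stays on the host
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  # Admit NVMe/TCP traffic on port 4420, then check both directions.
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1
}
```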
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 287802 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 287802 ']' 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.209 [2024-10-14 13:35:36.736937] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:24:45.209 [2024-10-14 13:35:36.737014] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.209 [2024-10-14 13:35:36.800467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.209 [2024-10-14 13:35:36.845336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.209 [2024-10-14 13:35:36.845385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:45.209 [2024-10-14 13:35:36.845408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.209 [2024-10-14 13:35:36.845433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.209 [2024-10-14 13:35:36.845442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.209 [2024-10-14 13:35:36.845962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.209 
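nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with `--wait-for-rpc` (so the framework pauses until iobuf options can be set) and then blocks in waitforlisten until the RPC socket answers. A sketch of that launch — the binary path is the one from this CI workspace, and the polling shown here via `rpc.py rpc_get_methods` is an assumed simplification of the real waitforlisten helper:

```shell
start_nvmf_tgt() {
  local spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -i 0: shm id, -e 0xFFFF: tracepoint mask, as in the traced command line.
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  # Poll until the target answers on its RPC socket.
  while ! "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
      > /dev/null 2>&1; do
    sleep 0.5
  done
}
```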
13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.209 13:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.209 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.209 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:45.209 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.209 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.467 Malloc0 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.467 [2024-10-14 13:35:37.093996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:45.467 [2024-10-14 13:35:37.118206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
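The target setup buried in the xtrace output above can be restated as a plain command sequence. This is a sketch only, assuming a running `nvmf_tgt` started with `--wait-for-rpc` and SPDK's `scripts/rpc.py` on `PATH`; the subsystem NQN, pool sizes, and listener address are the ones this run used:

```shell
# Disable accel and shrink the iobuf small pool (154 x 8 KiB) so that large
# reads are forced to wait for buffers -- the condition this test exercises.
rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc.py framework_start_init

# Back a namespace with a 32 MiB / 512 B-block malloc bdev and expose it
# over NVMe/TCP on 10.0.0.2:4420.
rpc.py bdev_malloc_create -b Malloc0 32 512
rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

These are command fragments against a live SPDK target, not a standalone script.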
00:24:45.467 13:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.467 [2024-10-14 13:35:37.189251] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:46.840 Initializing NVMe Controllers 00:24:46.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:46.840 Initialization complete. Launching workers. 00:24:46.840 ======================================================== 00:24:46.840 Latency(us) 00:24:46.840 Device Information : IOPS MiB/s Average min max 00:24:46.840 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 123.57 15.45 33529.70 7997.60 71824.12 00:24:46.840 ======================================================== 00:24:46.840 Total : 123.57 15.45 33529.70 7997.60 71824.12 00:24:46.840 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.097 13:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:47.097 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.098 rmmod nvme_tcp 00:24:47.098 rmmod nvme_fabrics 00:24:47.098 rmmod nvme_keyring 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 287802 ']' 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 287802 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 287802 ']' 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 287802 
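The `retry_count=1958` check above is the whole point of the test: with the small pool capped at 154 buffers, the 128 KiB perf reads must hit the buffer-wait path, so a zero retry count would mean the path was never exercised. A minimal sketch of that pass condition, using a hypothetical helper name (not part of the original `wait_for_buf.sh`):

```shell
# Hypothetical restatement of the wait_for_buf pass condition: the test
# passes only when iobuf_get_stats reports at least one small-pool retry.
wait_for_buf_passed() {
    retry_count=$1
    # A zero retry count means the buffer-wait path was never exercised.
    [ "$retry_count" -ne 0 ]
}

if wait_for_buf_passed 1958; then   # retry count reported by this run
    echo "PASS: buffer waits were exercised"
fi
wait_for_buf_passed 0 || echo "FAIL: no retries recorded"
```

The script's `[[ 1958 -eq 0 ]]` is the inverted form of the same check: it only aborts when the retry count is zero.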
00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 287802 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 287802' 00:24:47.098 killing process with pid 287802 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 287802 00:24:47.098 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 287802 00:24:47.357 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:47.357 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:47.357 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:47.357 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:47.357 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:24:47.357 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:47.357 13:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:24:47.357 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.357 13:35:39 
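The `iptr` cleanup above (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) works because every rule the harness installs carries an `SPDK_NVMF:` comment (visible later in the log where `ipts` adds the port-4420 ACCEPT rule). A sketch of the filtering step on a canned `iptables-save` dump instead of the live firewall; the sample non-SPDK rules are illustrative, not from this run:

```shell
# Tagged-rule cleanup: grep -v drops exactly the rules the harness added,
# leaving pre-existing firewall rules untouched.
dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -p tcp --dport 22 -j ACCEPT'

cleaned=$(printf '%s\n' "$dump" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

In the real path the filtered dump is piped straight into `iptables-restore`, restoring the pre-test rule set in one shot.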
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.357 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.357 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.357 13:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.265 00:24:49.265 real 0m6.760s 00:24:49.265 user 0m3.256s 00:24:49.265 sys 0m1.948s 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:49.265 ************************************ 00:24:49.265 END TEST nvmf_wait_for_buf 00:24:49.265 ************************************ 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:49.265 ************************************ 00:24:49.265 START TEST nvmf_fuzz 00:24:49.265 ************************************ 00:24:49.265 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:49.523 * Looking for test storage... 00:24:49.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:49.523 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:49.524 13:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.524 --rc genhtml_branch_coverage=1 00:24:49.524 --rc genhtml_function_coverage=1 
00:24:49.524 --rc genhtml_legend=1 00:24:49.524 --rc geninfo_all_blocks=1 00:24:49.524 --rc geninfo_unexecuted_blocks=1 00:24:49.524 00:24:49.524 ' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.524 --rc genhtml_branch_coverage=1 00:24:49.524 --rc genhtml_function_coverage=1 00:24:49.524 --rc genhtml_legend=1 00:24:49.524 --rc geninfo_all_blocks=1 00:24:49.524 --rc geninfo_unexecuted_blocks=1 00:24:49.524 00:24:49.524 ' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.524 --rc genhtml_branch_coverage=1 00:24:49.524 --rc genhtml_function_coverage=1 00:24:49.524 --rc genhtml_legend=1 00:24:49.524 --rc geninfo_all_blocks=1 00:24:49.524 --rc geninfo_unexecuted_blocks=1 00:24:49.524 00:24:49.524 ' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.524 --rc genhtml_branch_coverage=1 00:24:49.524 --rc genhtml_function_coverage=1 00:24:49.524 --rc genhtml_legend=1 00:24:49.524 --rc geninfo_all_blocks=1 00:24:49.524 --rc geninfo_unexecuted_blocks=1 00:24:49.524 00:24:49.524 ' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.524 
13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.524 13:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.056 13:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:24:52.056 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:52.056 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:52.056 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:52.056 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.056 13:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.056 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:52.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:24:52.057 00:24:52.057 --- 10.0.0.2 ping statistics --- 00:24:52.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.057 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:24:52.057 00:24:52.057 --- 10.0.0.1 ping statistics --- 00:24:52.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.057 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=290017 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 290017 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' 
-z 290017 ']' 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.057 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.315 Malloc0 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:52.315 13:35:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:24.372 Fuzzing completed. 
Shutting down the fuzz application 00:25:24.372 00:25:24.372 Dumping successful admin opcodes: 00:25:24.372 8, 9, 10, 24, 00:25:24.372 Dumping successful io opcodes: 00:25:24.372 0, 9, 00:25:24.372 NS: 0x2000008eff00 I/O qp, Total commands completed: 529801, total successful commands: 3074, random_seed: 2587458176 00:25:24.372 NS: 0x2000008eff00 admin qp, Total commands completed: 60160, total successful commands: 475, random_seed: 869329792 00:25:24.372 13:36:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:24.372 Fuzzing completed. Shutting down the fuzz application 00:25:24.372 00:25:24.372 Dumping successful admin opcodes: 00:25:24.372 24, 00:25:24.372 Dumping successful io opcodes: 00:25:24.372 00:25:24.372 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2310212816 00:25:24.372 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2310326462 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:24.372 13:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:24.372 13:36:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:24.372 rmmod nvme_tcp 00:25:24.372 rmmod nvme_fabrics 00:25:24.372 rmmod nvme_keyring 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 290017 ']' 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 290017 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 290017 ']' 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 290017 00:25:24.372 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:24.373 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:24.373 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 290017 00:25:24.373 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:24.373 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:25:24.373 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 290017' 00:25:24.373 killing process with pid 290017 00:25:24.373 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 290017 00:25:24.373 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 290017 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.631 13:36:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.535 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:26.535 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:26.535 00:25:26.535 real 0m37.286s 00:25:26.535 user 0m51.689s 00:25:26.535 sys 0m14.234s 00:25:26.535 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:26.535 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.535 ************************************ 00:25:26.535 END TEST nvmf_fuzz 00:25:26.535 ************************************ 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:26.794 ************************************ 00:25:26.794 START TEST nvmf_multiconnection 00:25:26.794 ************************************ 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.794 * Looking for test storage... 
00:25:26.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:26.794 13:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:26.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.794 --rc genhtml_branch_coverage=1 00:25:26.794 --rc genhtml_function_coverage=1 00:25:26.794 --rc genhtml_legend=1 00:25:26.794 --rc geninfo_all_blocks=1 00:25:26.794 --rc geninfo_unexecuted_blocks=1 00:25:26.794 00:25:26.794 ' 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:26.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.794 --rc genhtml_branch_coverage=1 00:25:26.794 --rc genhtml_function_coverage=1 00:25:26.794 --rc genhtml_legend=1 00:25:26.794 --rc geninfo_all_blocks=1 00:25:26.794 --rc geninfo_unexecuted_blocks=1 00:25:26.794 00:25:26.794 ' 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:26.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.794 --rc genhtml_branch_coverage=1 00:25:26.794 --rc genhtml_function_coverage=1 00:25:26.794 --rc genhtml_legend=1 00:25:26.794 --rc geninfo_all_blocks=1 00:25:26.794 --rc geninfo_unexecuted_blocks=1 00:25:26.794 00:25:26.794 ' 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:26.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.794 --rc genhtml_branch_coverage=1 00:25:26.794 --rc genhtml_function_coverage=1 00:25:26.794 --rc genhtml_legend=1 00:25:26.794 --rc geninfo_all_blocks=1 00:25:26.794 --rc geninfo_unexecuted_blocks=1 00:25:26.794 00:25:26.794 ' 00:25:26.794 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.795 13:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.795 13:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.332 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.333 13:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.333 13:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:29.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:29.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:29.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == 
up ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:29.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.333 13:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:25:29.333 00:25:29.333 --- 10.0.0.2 ping statistics --- 00:25:29.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.333 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:25:29.333 00:25:29.333 --- 10.0.0.1 ping statistics --- 00:25:29.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.333 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=296369 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 296369 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 296369 ']' 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.333 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.334 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:29.334 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.334 13:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.334 [2024-10-14 13:36:21.013573] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:25:29.334 [2024-10-14 13:36:21.013652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.334 [2024-10-14 13:36:21.079952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.334 [2024-10-14 13:36:21.127608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.334 [2024-10-14 13:36:21.127676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.334 [2024-10-14 13:36:21.127699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.334 [2024-10-14 13:36:21.127710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.334 [2024-10-14 13:36:21.127719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:29.334 [2024-10-14 13:36:21.129269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.334 [2024-10-14 13:36:21.129330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.334 [2024-10-14 13:36:21.129377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.334 [2024-10-14 13:36:21.129380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 [2024-10-14 13:36:21.271696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:29.592 13:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 Malloc1 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 [2024-10-14 13:36:21.347031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 Malloc2 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:29.592 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.593 Malloc3 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.593 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 Malloc4 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 
13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 Malloc5 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 Malloc6 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 Malloc7 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 Malloc8 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.852 13:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.852 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.853 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.853 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:29.853 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.853 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 Malloc9 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 Malloc10 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 Malloc11 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:30.111 
13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.111 13:36:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:25:30.676 13:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:30.676 13:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:30.676 13:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.676 13:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:30.676 13:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.203 13:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:33.460 13:36:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:33.460 13:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:33.460 13:36:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.460 13:36:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:33.460 13:36:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.358 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:36.291 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:36.291 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:36.291 13:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.291 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:36.291 13:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.189 13:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:38.756 13:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:38.756 13:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:38.756 13:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.756 
13:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:38.756 13:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.652 13:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:41.586 13:36:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:41.586 13:36:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:41.586 13:36:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.586 13:36:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:41.586 13:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.482 13:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:44.414 13:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:44.414 13:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:44.414 13:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.414 13:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:44.414 13:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:46.941 13:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:46.941 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:46.942 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:46.942 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:46.942 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:46.942 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:46.942 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.942 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:47.199 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:47.199 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:47.199 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:47.199 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:47.199 13:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:49.096 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:49.096 13:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:49.096 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:49.096 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:49.096 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.096 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:49.096 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.096 13:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:50.028 13:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:50.028 13:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:50.028 13:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.028 13:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:50.028 13:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:52.554 13:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:52.554 13:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:52.554 13:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:52.554 13:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:52.554 13:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:52.554 13:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:52.554 13:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.554 13:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:53.120 13:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:53.120 13:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:53.120 13:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.120 13:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:53.120 13:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:55.017 13:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:55.017 13:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:55.017 13:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:55.017 13:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:55.017 13:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.017 13:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:55.017 13:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.017 13:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:55.950 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:55.950 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:55.950 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.950 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:55.950 13:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:57.848 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:57.848 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:57.848 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:57.848 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:57.848 13:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.848 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:57.848 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.848 13:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:58.781 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:58.781 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:58.781 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.781 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:58.781 13:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:00.696 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:00.696 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:00.696 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:00.696 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:00.697 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.697 
13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:00.697 13:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:00.697 [global] 00:26:00.697 thread=1 00:26:00.697 invalidate=1 00:26:00.697 rw=read 00:26:00.697 time_based=1 00:26:00.697 runtime=10 00:26:00.697 ioengine=libaio 00:26:00.697 direct=1 00:26:00.697 bs=262144 00:26:00.697 iodepth=64 00:26:00.697 norandommap=1 00:26:00.697 numjobs=1 00:26:00.697 00:26:00.697 [job0] 00:26:00.697 filename=/dev/nvme0n1 00:26:00.697 [job1] 00:26:00.697 filename=/dev/nvme10n1 00:26:00.697 [job2] 00:26:00.697 filename=/dev/nvme1n1 00:26:00.697 [job3] 00:26:00.697 filename=/dev/nvme2n1 00:26:00.697 [job4] 00:26:00.697 filename=/dev/nvme3n1 00:26:00.954 [job5] 00:26:00.954 filename=/dev/nvme4n1 00:26:00.954 [job6] 00:26:00.954 filename=/dev/nvme5n1 00:26:00.954 [job7] 00:26:00.954 filename=/dev/nvme6n1 00:26:00.954 [job8] 00:26:00.954 filename=/dev/nvme7n1 00:26:00.954 [job9] 00:26:00.954 filename=/dev/nvme8n1 00:26:00.954 [job10] 00:26:00.954 filename=/dev/nvme9n1 00:26:00.954 Could not set queue depth (nvme0n1) 00:26:00.954 Could not set queue depth (nvme10n1) 00:26:00.954 Could not set queue depth (nvme1n1) 00:26:00.954 Could not set queue depth (nvme2n1) 00:26:00.954 Could not set queue depth (nvme3n1) 00:26:00.954 Could not set queue depth (nvme4n1) 00:26:00.954 Could not set queue depth (nvme5n1) 00:26:00.954 Could not set queue depth (nvme6n1) 00:26:00.954 Could not set queue depth (nvme7n1) 00:26:00.954 Could not set queue depth (nvme8n1) 00:26:00.954 Could not set queue depth (nvme9n1) 00:26:01.212 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.212 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:01.212 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.212 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.212 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.212 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.212 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.212 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.213 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.213 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.213 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.213 fio-3.35 00:26:01.213 Starting 11 threads 00:26:13.410 00:26:13.410 job0: (groupid=0, jobs=1): err= 0: pid=300616: Mon Oct 14 13:37:03 2024 00:26:13.410 read: IOPS=123, BW=31.0MiB/s (32.5MB/s)(315MiB/10169msec) 00:26:13.410 slat (usec): min=9, max=308787, avg=4748.28, stdev=21991.16 00:26:13.410 clat (msec): min=13, max=935, avg=511.71, stdev=182.04 00:26:13.410 lat (msec): min=13, max=935, avg=516.45, stdev=185.20 00:26:13.410 clat percentiles (msec): 00:26:13.410 | 1.00th=[ 19], 5.00th=[ 178], 10.00th=[ 262], 20.00th=[ 338], 00:26:13.410 | 30.00th=[ 418], 40.00th=[ 498], 50.00th=[ 558], 60.00th=[ 609], 00:26:13.410 | 70.00th=[ 642], 80.00th=[ 667], 90.00th=[ 709], 95.00th=[ 726], 00:26:13.410 | 99.00th=[ 776], 99.50th=[ 785], 99.90th=[ 860], 99.95th=[ 936], 00:26:13.410 | 99.99th=[ 936] 00:26:13.410 bw ( KiB/s): min=15872, max=51200, 
per=4.06%, avg=30589.95, stdev=10083.87, samples=20 00:26:13.410 iops : min= 62, max= 200, avg=119.40, stdev=39.39, samples=20 00:26:13.410 lat (msec) : 20=1.03%, 50=0.71%, 100=1.59%, 250=5.56%, 500=31.69% 00:26:13.410 lat (msec) : 750=56.24%, 1000=3.18% 00:26:13.410 cpu : usr=0.05%, sys=0.39%, ctx=227, majf=0, minf=4097 00:26:13.410 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:26:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.410 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.410 issued rwts: total=1259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.410 job1: (groupid=0, jobs=1): err= 0: pid=300619: Mon Oct 14 13:37:03 2024 00:26:13.410 read: IOPS=201, BW=50.4MiB/s (52.8MB/s)(512MiB/10169msec) 00:26:13.410 slat (usec): min=13, max=256982, avg=4888.56, stdev=18562.99 00:26:13.410 clat (msec): min=21, max=905, avg=312.36, stdev=263.00 00:26:13.410 lat (msec): min=21, max=905, avg=317.25, stdev=267.25 00:26:13.410 clat percentiles (msec): 00:26:13.410 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 39], 00:26:13.410 | 30.00th=[ 56], 40.00th=[ 188], 50.00th=[ 245], 60.00th=[ 317], 00:26:13.410 | 70.00th=[ 527], 80.00th=[ 617], 90.00th=[ 701], 95.00th=[ 751], 00:26:13.410 | 99.00th=[ 793], 99.50th=[ 810], 99.90th=[ 835], 99.95th=[ 844], 00:26:13.410 | 99.99th=[ 902] 00:26:13.410 bw ( KiB/s): min=16896, max=359424, per=6.75%, avg=50820.55, stdev=74911.31, samples=20 00:26:13.411 iops : min= 66, max= 1404, avg=198.50, stdev=292.62, samples=20 00:26:13.411 lat (msec) : 50=28.79%, 100=8.20%, 250=14.15%, 500=16.30%, 750=27.72% 00:26:13.411 lat (msec) : 1000=4.83% 00:26:13.411 cpu : usr=0.10%, sys=0.90%, ctx=313, majf=0, minf=4097 00:26:13.411 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.411 job2: (groupid=0, jobs=1): err= 0: pid=300620: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=195, BW=48.8MiB/s (51.2MB/s)(497MiB/10180msec) 00:26:13.411 slat (usec): min=13, max=420819, avg=4858.51, stdev=22274.53 00:26:13.411 clat (msec): min=2, max=1021, avg=322.37, stdev=192.10 00:26:13.411 lat (msec): min=2, max=1021, avg=327.23, stdev=195.12 00:26:13.411 clat percentiles (msec): 00:26:13.411 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 34], 20.00th=[ 184], 00:26:13.411 | 30.00th=[ 245], 40.00th=[ 279], 50.00th=[ 313], 60.00th=[ 359], 00:26:13.411 | 70.00th=[ 405], 80.00th=[ 472], 90.00th=[ 542], 95.00th=[ 693], 00:26:13.411 | 99.00th=[ 835], 99.50th=[ 877], 99.90th=[ 1020], 99.95th=[ 1020], 00:26:13.411 | 99.99th=[ 1020] 00:26:13.411 bw ( KiB/s): min=12288, max=168448, per=6.54%, avg=49283.00, stdev=32905.19, samples=20 00:26:13.411 iops : min= 48, max= 658, avg=192.50, stdev=128.54, samples=20 00:26:13.411 lat (msec) : 4=0.65%, 10=2.41%, 20=1.91%, 50=7.84%, 100=4.58% 00:26:13.411 lat (msec) : 250=13.02%, 500=56.11%, 750=11.01%, 1000=2.31%, 2000=0.15% 00:26:13.411 cpu : usr=0.19%, sys=0.79%, ctx=411, majf=0, minf=4097 00:26:13.411 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.411 job3: (groupid=0, jobs=1): err= 0: pid=300621: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=168, BW=42.2MiB/s (44.3MB/s)(430MiB/10182msec) 00:26:13.411 slat (usec): min=9, max=339971, 
avg=5469.63, stdev=20922.00 00:26:13.411 clat (msec): min=22, max=878, avg=373.26, stdev=148.75 00:26:13.411 lat (msec): min=22, max=960, avg=378.73, stdev=150.62 00:26:13.411 clat percentiles (msec): 00:26:13.411 | 1.00th=[ 29], 5.00th=[ 205], 10.00th=[ 234], 20.00th=[ 271], 00:26:13.411 | 30.00th=[ 300], 40.00th=[ 317], 50.00th=[ 338], 60.00th=[ 376], 00:26:13.411 | 70.00th=[ 409], 80.00th=[ 472], 90.00th=[ 600], 95.00th=[ 667], 00:26:13.411 | 99.00th=[ 827], 99.50th=[ 827], 99.90th=[ 877], 99.95th=[ 877], 00:26:13.411 | 99.99th=[ 877] 00:26:13.411 bw ( KiB/s): min=18944, max=64512, per=5.62%, avg=42362.75, stdev=12391.40, samples=20 00:26:13.411 iops : min= 74, max= 252, avg=165.40, stdev=48.52, samples=20 00:26:13.411 lat (msec) : 50=2.33%, 100=1.40%, 250=10.06%, 500=69.11%, 750=15.36% 00:26:13.411 lat (msec) : 1000=1.75% 00:26:13.411 cpu : usr=0.07%, sys=0.68%, ctx=222, majf=0, minf=3721 00:26:13.411 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=1719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.411 job4: (groupid=0, jobs=1): err= 0: pid=300624: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=152, BW=38.2MiB/s (40.0MB/s)(388MiB/10169msec) 00:26:13.411 slat (usec): min=11, max=328498, avg=6284.56, stdev=23881.54 00:26:13.411 clat (usec): min=1604, max=1040.0k, avg=412546.22, stdev=239251.63 00:26:13.411 lat (usec): min=1635, max=1040.1k, avg=418830.78, stdev=243470.96 00:26:13.411 clat percentiles (msec): 00:26:13.411 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 108], 20.00th=[ 222], 00:26:13.411 | 30.00th=[ 245], 40.00th=[ 271], 50.00th=[ 368], 60.00th=[ 527], 00:26:13.411 | 70.00th=[ 600], 80.00th=[ 667], 90.00th=[ 718], 95.00th=[ 751], 00:26:13.411 | 99.00th=[ 852], 
99.50th=[ 869], 99.90th=[ 1028], 99.95th=[ 1036], 00:26:13.411 | 99.99th=[ 1036] 00:26:13.411 bw ( KiB/s): min=15872, max=107520, per=5.06%, avg=38089.55, stdev=24086.79, samples=20 00:26:13.411 iops : min= 62, max= 420, avg=148.70, stdev=93.98, samples=20 00:26:13.411 lat (msec) : 2=0.13%, 4=0.39%, 10=2.26%, 20=6.06%, 50=0.64% 00:26:13.411 lat (msec) : 100=0.19%, 250=22.29%, 500=24.29%, 750=38.40%, 1000=5.22% 00:26:13.411 lat (msec) : 2000=0.13% 00:26:13.411 cpu : usr=0.08%, sys=0.66%, ctx=357, majf=0, minf=4097 00:26:13.411 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.411 job5: (groupid=0, jobs=1): err= 0: pid=300627: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=173, BW=43.3MiB/s (45.4MB/s)(441MiB/10179msec) 00:26:13.411 slat (usec): min=9, max=393926, avg=4996.23, stdev=22728.93 00:26:13.411 clat (usec): min=1369, max=870091, avg=364117.12, stdev=193602.75 00:26:13.411 lat (usec): min=1542, max=1029.7k, avg=369113.35, stdev=196531.36 00:26:13.411 clat percentiles (msec): 00:26:13.411 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 146], 20.00th=[ 255], 00:26:13.411 | 30.00th=[ 284], 40.00th=[ 309], 50.00th=[ 330], 60.00th=[ 359], 00:26:13.411 | 70.00th=[ 409], 80.00th=[ 472], 90.00th=[ 667], 95.00th=[ 743], 00:26:13.411 | 99.00th=[ 818], 99.50th=[ 827], 99.90th=[ 869], 99.95th=[ 869], 00:26:13.411 | 99.99th=[ 869] 00:26:13.411 bw ( KiB/s): min=17408, max=72704, per=5.77%, avg=43489.05, stdev=14941.51, samples=20 00:26:13.411 iops : min= 68, max= 284, avg=169.80, stdev=58.46, samples=20 00:26:13.411 lat (msec) : 2=0.23%, 4=0.34%, 10=7.88%, 50=0.06%, 100=0.68% 00:26:13.411 lat (msec) : 250=9.59%, 500=62.56%, 750=14.07%, 
1000=4.59% 00:26:13.411 cpu : usr=0.07%, sys=0.70%, ctx=332, majf=0, minf=4097 00:26:13.411 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=1763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.411 job6: (groupid=0, jobs=1): err= 0: pid=300628: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=180, BW=45.1MiB/s (47.3MB/s)(460MiB/10180msec) 00:26:13.411 slat (usec): min=12, max=347482, avg=4350.34, stdev=19014.40 00:26:13.411 clat (msec): min=15, max=857, avg=349.68, stdev=205.85 00:26:13.411 lat (msec): min=16, max=857, avg=354.03, stdev=209.02 00:26:13.411 clat percentiles (msec): 00:26:13.411 | 1.00th=[ 20], 5.00th=[ 39], 10.00th=[ 50], 20.00th=[ 186], 00:26:13.411 | 30.00th=[ 247], 40.00th=[ 275], 50.00th=[ 309], 60.00th=[ 347], 00:26:13.411 | 70.00th=[ 405], 80.00th=[ 550], 90.00th=[ 667], 95.00th=[ 726], 00:26:13.411 | 99.00th=[ 818], 99.50th=[ 835], 99.90th=[ 860], 99.95th=[ 860], 00:26:13.411 | 99.99th=[ 860] 00:26:13.411 bw ( KiB/s): min=19456, max=95232, per=6.03%, avg=45403.90, stdev=23034.15, samples=20 00:26:13.411 iops : min= 76, max= 372, avg=177.30, stdev=89.96, samples=20 00:26:13.411 lat (msec) : 20=1.09%, 50=9.09%, 100=3.16%, 250=17.68%, 500=44.61% 00:26:13.411 lat (msec) : 750=21.22%, 1000=3.16% 00:26:13.411 cpu : usr=0.09%, sys=0.77%, ctx=340, majf=0, minf=4097 00:26:13.411 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:13.411 job7: (groupid=0, jobs=1): err= 0: pid=300629: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=1201, BW=300MiB/s (315MB/s)(3026MiB/10073msec) 00:26:13.411 slat (usec): min=12, max=85453, avg=806.29, stdev=2774.12 00:26:13.411 clat (msec): min=25, max=403, avg=52.42, stdev=35.09 00:26:13.411 lat (msec): min=26, max=403, avg=53.22, stdev=35.46 00:26:13.411 clat percentiles (msec): 00:26:13.411 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 35], 00:26:13.411 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 40], 60.00th=[ 41], 00:26:13.411 | 70.00th=[ 47], 80.00th=[ 66], 90.00th=[ 97], 95.00th=[ 112], 00:26:13.411 | 99.00th=[ 174], 99.50th=[ 279], 99.90th=[ 376], 99.95th=[ 376], 00:26:13.411 | 99.99th=[ 388] 00:26:13.411 bw ( KiB/s): min=70656, max=487936, per=40.91%, avg=308125.20, stdev=125956.75, samples=20 00:26:13.411 iops : min= 276, max= 1906, avg=1203.55, stdev=492.06, samples=20 00:26:13.411 lat (msec) : 50=72.26%, 100=18.84%, 250=8.38%, 500=0.52% 00:26:13.411 cpu : usr=0.66%, sys=4.06%, ctx=1692, majf=0, minf=4097 00:26:13.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=12102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.411 job8: (groupid=0, jobs=1): err= 0: pid=300631: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=123, BW=31.0MiB/s (32.5MB/s)(315MiB/10172msec) 00:26:13.411 slat (usec): min=12, max=477778, avg=5794.20, stdev=26518.83 00:26:13.411 clat (usec): min=801, max=1097.1k, avg=510379.33, stdev=283705.39 00:26:13.411 lat (usec): min=828, max=1136.1k, avg=516173.53, stdev=287468.63 00:26:13.411 clat percentiles (usec): 00:26:13.411 | 1.00th=[ 1106], 5.00th=[ 1942], 10.00th=[ 8717], 00:26:13.411 | 20.00th=[ 204473], 30.00th=[ 354419], 
40.00th=[ 541066], 00:26:13.411 | 50.00th=[ 599786], 60.00th=[ 650118], 70.00th=[ 692061], 00:26:13.411 | 80.00th=[ 725615], 90.00th=[ 801113], 95.00th=[ 884999], 00:26:13.411 | 99.00th=[1044382], 99.50th=[1044382], 99.90th=[1098908], 00:26:13.411 | 99.95th=[1098908], 99.99th=[1098908] 00:26:13.411 bw ( KiB/s): min= 5632, max=67960, per=4.06%, avg=30605.65, stdev=14940.28, samples=20 00:26:13.411 iops : min= 22, max= 265, avg=119.50, stdev=58.26, samples=20 00:26:13.411 lat (usec) : 1000=0.24% 00:26:13.411 lat (msec) : 2=4.76%, 4=2.06%, 10=5.24%, 20=0.40%, 50=0.48% 00:26:13.411 lat (msec) : 100=2.30%, 250=6.19%, 500=14.21%, 750=48.41%, 1000=13.73% 00:26:13.411 lat (msec) : 2000=1.98% 00:26:13.411 cpu : usr=0.10%, sys=0.48%, ctx=311, majf=0, minf=4097 00:26:13.411 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:26:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.411 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.411 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.411 job9: (groupid=0, jobs=1): err= 0: pid=300632: Mon Oct 14 13:37:03 2024 00:26:13.411 read: IOPS=310, BW=77.6MiB/s (81.4MB/s)(790MiB/10170msec) 00:26:13.411 slat (usec): min=8, max=475109, avg=1987.85, stdev=16899.74 00:26:13.411 clat (usec): min=1048, max=849517, avg=203929.59, stdev=195374.51 00:26:13.411 lat (usec): min=1067, max=1133.9k, avg=205917.44, stdev=197713.35 00:26:13.412 clat percentiles (msec): 00:26:13.412 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 34], 20.00th=[ 48], 00:26:13.412 | 30.00th=[ 73], 40.00th=[ 110], 50.00th=[ 136], 60.00th=[ 159], 00:26:13.412 | 70.00th=[ 241], 80.00th=[ 309], 90.00th=[ 575], 95.00th=[ 651], 00:26:13.412 | 99.00th=[ 751], 99.50th=[ 802], 99.90th=[ 827], 99.95th=[ 844], 00:26:13.412 | 99.99th=[ 852] 00:26:13.412 bw ( KiB/s): min=16896, max=231936, per=10.52%, 
avg=79213.90, stdev=50469.46, samples=20 00:26:13.412 iops : min= 66, max= 906, avg=309.40, stdev=197.15, samples=20 00:26:13.412 lat (msec) : 2=0.28%, 4=0.06%, 10=0.16%, 20=1.49%, 50=19.06% 00:26:13.412 lat (msec) : 100=17.10%, 250=32.93%, 500=15.90%, 750=11.81%, 1000=1.20% 00:26:13.412 cpu : usr=0.09%, sys=0.83%, ctx=617, majf=0, minf=4097 00:26:13.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.412 issued rwts: total=3158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.412 job10: (groupid=0, jobs=1): err= 0: pid=300633: Mon Oct 14 13:37:03 2024 00:26:13.412 read: IOPS=135, BW=33.9MiB/s (35.6MB/s)(347MiB/10223msec) 00:26:13.412 slat (usec): min=8, max=524120, avg=5101.17, stdev=27454.92 00:26:13.412 clat (msec): min=142, max=964, avg=466.19, stdev=200.30 00:26:13.412 lat (msec): min=142, max=1044, avg=471.29, stdev=203.21 00:26:13.412 clat percentiles (msec): 00:26:13.412 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 192], 20.00th=[ 251], 00:26:13.412 | 30.00th=[ 317], 40.00th=[ 376], 50.00th=[ 477], 60.00th=[ 542], 00:26:13.412 | 70.00th=[ 609], 80.00th=[ 667], 90.00th=[ 709], 95.00th=[ 793], 00:26:13.412 | 99.00th=[ 885], 99.50th=[ 919], 99.90th=[ 927], 99.95th=[ 961], 00:26:13.412 | 99.99th=[ 961] 00:26:13.412 bw ( KiB/s): min=12288, max=67584, per=4.73%, avg=35645.68, stdev=14684.20, samples=19 00:26:13.412 iops : min= 48, max= 264, avg=139.16, stdev=57.41, samples=19 00:26:13.412 lat (msec) : 250=19.39%, 500=33.24%, 750=41.53%, 1000=5.84% 00:26:13.412 cpu : usr=0.05%, sys=0.40%, ctx=206, majf=0, minf=4097 00:26:13.412 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:26:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.412 
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.412 issued rwts: total=1387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.412 00:26:13.412 Run status group 0 (all jobs): 00:26:13.412 READ: bw=735MiB/s (771MB/s), 31.0MiB/s-300MiB/s (32.5MB/s-315MB/s), io=7519MiB (7884MB), run=10073-10223msec 00:26:13.412 00:26:13.412 Disk stats (read/write): 00:26:13.412 nvme0n1: ios=2379/0, merge=0/0, ticks=1221564/0, in_queue=1221564, util=97.08% 00:26:13.412 nvme10n1: ios=3949/0, merge=0/0, ticks=1215909/0, in_queue=1215909, util=97.29% 00:26:13.412 nvme1n1: ios=3926/0, merge=0/0, ticks=1262509/0, in_queue=1262509, util=97.64% 00:26:13.412 nvme2n1: ios=3392/0, merge=0/0, ticks=1262982/0, in_queue=1262982, util=97.81% 00:26:13.412 nvme3n1: ios=2963/0, merge=0/0, ticks=1211936/0, in_queue=1211936, util=97.82% 00:26:13.412 nvme4n1: ios=3490/0, merge=0/0, ticks=1258490/0, in_queue=1258490, util=98.23% 00:26:13.412 nvme5n1: ios=3650/0, merge=0/0, ticks=1266119/0, in_queue=1266119, util=98.40% 00:26:13.412 nvme6n1: ios=23990/0, merge=0/0, ticks=1236901/0, in_queue=1236901, util=98.49% 00:26:13.412 nvme7n1: ios=2371/0, merge=0/0, ticks=1206640/0, in_queue=1206640, util=98.90% 00:26:13.412 nvme8n1: ios=6143/0, merge=0/0, ticks=1240261/0, in_queue=1240261, util=99.10% 00:26:13.412 nvme9n1: ios=2710/0, merge=0/0, ticks=1253790/0, in_queue=1253790, util=99.26% 00:26:13.412 13:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:13.412 [global] 00:26:13.412 thread=1 00:26:13.412 invalidate=1 00:26:13.412 rw=randwrite 00:26:13.412 time_based=1 00:26:13.412 runtime=10 00:26:13.412 ioengine=libaio 00:26:13.412 direct=1 00:26:13.412 bs=262144 00:26:13.412 iodepth=64 00:26:13.412 norandommap=1 00:26:13.412 numjobs=1 00:26:13.412 
00:26:13.412 [job0] 00:26:13.412 filename=/dev/nvme0n1 00:26:13.412 [job1] 00:26:13.412 filename=/dev/nvme10n1 00:26:13.412 [job2] 00:26:13.412 filename=/dev/nvme1n1 00:26:13.412 [job3] 00:26:13.412 filename=/dev/nvme2n1 00:26:13.412 [job4] 00:26:13.412 filename=/dev/nvme3n1 00:26:13.412 [job5] 00:26:13.412 filename=/dev/nvme4n1 00:26:13.412 [job6] 00:26:13.412 filename=/dev/nvme5n1 00:26:13.412 [job7] 00:26:13.412 filename=/dev/nvme6n1 00:26:13.412 [job8] 00:26:13.412 filename=/dev/nvme7n1 00:26:13.412 [job9] 00:26:13.412 filename=/dev/nvme8n1 00:26:13.412 [job10] 00:26:13.412 filename=/dev/nvme9n1 00:26:13.412 Could not set queue depth (nvme0n1) 00:26:13.412 Could not set queue depth (nvme10n1) 00:26:13.412 Could not set queue depth (nvme1n1) 00:26:13.412 Could not set queue depth (nvme2n1) 00:26:13.412 Could not set queue depth (nvme3n1) 00:26:13.412 Could not set queue depth (nvme4n1) 00:26:13.412 Could not set queue depth (nvme5n1) 00:26:13.412 Could not set queue depth (nvme6n1) 00:26:13.412 Could not set queue depth (nvme7n1) 00:26:13.412 Could not set queue depth (nvme8n1) 00:26:13.412 Could not set queue depth (nvme9n1) 00:26:13.412 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.412 fio-3.35 00:26:13.412 Starting 11 threads 00:26:23.409 00:26:23.409 job0: (groupid=0, jobs=1): err= 0: pid=301357: Mon Oct 14 13:37:14 2024 00:26:23.409 write: IOPS=390, BW=97.6MiB/s (102MB/s)(986MiB/10100msec); 0 zone resets 00:26:23.409 slat (usec): min=14, max=110250, avg=1273.24, stdev=5190.03 00:26:23.409 clat (usec): min=1176, max=679961, avg=162544.54, stdev=127884.18 00:26:23.409 lat (usec): min=1226, max=680018, avg=163817.78, stdev=129041.22 00:26:23.409 clat percentiles (msec): 00:26:23.409 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 32], 20.00th=[ 53], 00:26:23.409 | 30.00th=[ 87], 40.00th=[ 113], 50.00th=[ 129], 60.00th=[ 159], 00:26:23.409 | 70.00th=[ 197], 80.00th=[ 253], 90.00th=[ 334], 95.00th=[ 426], 00:26:23.409 | 99.00th=[ 575], 99.50th=[ 600], 99.90th=[ 659], 99.95th=[ 676], 00:26:23.409 | 99.99th=[ 684] 00:26:23.409 bw ( KiB/s): min=31744, max=165888, per=10.56%, avg=99315.20, stdev=37946.26, samples=20 00:26:23.409 iops : min= 124, max= 648, avg=387.95, stdev=148.23, samples=20 00:26:23.409 lat (msec) : 2=0.20%, 4=0.86%, 10=2.64%, 20=4.13%, 50=10.68% 00:26:23.409 lat (msec) : 100=16.31%, 250=44.33%, 500=17.45%, 750=3.40% 00:26:23.409 cpu : usr=1.20%, sys=1.41%, ctx=2756, majf=0, minf=1 00:26:23.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:23.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.409 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.409 issued rwts: total=0,3943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.409 job1: (groupid=0, jobs=1): err= 0: pid=301369: Mon Oct 14 13:37:14 2024 00:26:23.409 write: IOPS=413, BW=103MiB/s (108MB/s)(1043MiB/10077msec); 0 zone resets 00:26:23.409 slat (usec): min=17, max=441266, avg=1882.43, stdev=8698.38 00:26:23.409 clat (usec): min=819, max=939320, avg=152611.45, stdev=120858.55 00:26:23.409 lat (usec): min=857, max=939797, avg=154493.88, stdev=122194.64 00:26:23.409 clat percentiles (msec): 00:26:23.409 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 33], 20.00th=[ 63], 00:26:23.409 | 30.00th=[ 81], 40.00th=[ 89], 50.00th=[ 126], 60.00th=[ 155], 00:26:23.409 | 70.00th=[ 197], 80.00th=[ 230], 90.00th=[ 305], 95.00th=[ 334], 00:26:23.409 | 99.00th=[ 584], 99.50th=[ 827], 99.90th=[ 919], 99.95th=[ 936], 00:26:23.409 | 99.99th=[ 936] 00:26:23.409 bw ( KiB/s): min=49152, max=200192, per=11.18%, avg=105133.25, stdev=48756.89, samples=20 00:26:23.409 iops : min= 192, max= 782, avg=410.65, stdev=190.48, samples=20 00:26:23.409 lat (usec) : 1000=0.10% 00:26:23.409 lat (msec) : 2=0.26%, 4=1.44%, 10=2.11%, 20=2.49%, 50=6.98% 00:26:23.409 lat (msec) : 100=29.52%, 250=41.73%, 500=13.88%, 750=0.86%, 1000=0.62% 00:26:23.409 cpu : usr=1.24%, sys=1.08%, ctx=1938, majf=0, minf=1 00:26:23.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:23.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.409 issued rwts: total=0,4170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.409 job2: (groupid=0, jobs=1): err= 0: pid=301370: Mon Oct 14 13:37:14 2024 00:26:23.409 write: IOPS=290, BW=72.6MiB/s (76.1MB/s)(729MiB/10038msec); 0 zone resets 
00:26:23.409 slat (usec): min=20, max=165491, avg=2620.15, stdev=8698.20 00:26:23.409 clat (usec): min=663, max=706165, avg=217536.86, stdev=174679.00 00:26:23.409 lat (usec): min=721, max=706213, avg=220157.01, stdev=176858.40 00:26:23.409 clat percentiles (msec): 00:26:23.409 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 28], 20.00th=[ 41], 00:26:23.409 | 30.00th=[ 63], 40.00th=[ 142], 50.00th=[ 205], 60.00th=[ 243], 00:26:23.409 | 70.00th=[ 284], 80.00th=[ 363], 90.00th=[ 489], 95.00th=[ 558], 00:26:23.409 | 99.00th=[ 667], 99.50th=[ 693], 99.90th=[ 709], 99.95th=[ 709], 00:26:23.409 | 99.99th=[ 709] 00:26:23.409 bw ( KiB/s): min=22528, max=249344, per=7.76%, avg=73007.35, stdev=52552.30, samples=20 00:26:23.409 iops : min= 88, max= 974, avg=285.15, stdev=205.31, samples=20 00:26:23.409 lat (usec) : 750=0.17%, 1000=0.10% 00:26:23.409 lat (msec) : 2=0.21%, 4=1.72%, 10=1.48%, 20=3.57%, 50=19.35% 00:26:23.409 lat (msec) : 100=8.03%, 250=27.00%, 500=29.64%, 750=8.75% 00:26:23.409 cpu : usr=0.93%, sys=1.03%, ctx=1574, majf=0, minf=1 00:26:23.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:26:23.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.409 issued rwts: total=0,2915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.409 job3: (groupid=0, jobs=1): err= 0: pid=301371: Mon Oct 14 13:37:14 2024 00:26:23.409 write: IOPS=258, BW=64.7MiB/s (67.9MB/s)(653MiB/10082msec); 0 zone resets 00:26:23.409 slat (usec): min=14, max=109017, avg=2839.27, stdev=7545.02 00:26:23.409 clat (usec): min=652, max=646901, avg=244308.76, stdev=136586.08 00:26:23.409 lat (usec): min=690, max=646955, avg=247148.03, stdev=138164.28 00:26:23.409 clat percentiles (usec): 00:26:23.409 | 1.00th=[ 1237], 5.00th=[ 67634], 10.00th=[ 86508], 20.00th=[106431], 00:26:23.409 | 30.00th=[154141], 
40.00th=[204473], 50.00th=[231736], 60.00th=[258999], 00:26:23.409 | 70.00th=[299893], 80.00th=[341836], 90.00th=[450888], 95.00th=[501220], 00:26:23.409 | 99.00th=[616563], 99.50th=[633340], 99.90th=[650118], 99.95th=[650118], 00:26:23.409 | 99.99th=[650118] 00:26:23.409 bw ( KiB/s): min=24576, max=165376, per=6.93%, avg=65201.85, stdev=31497.75, samples=20 00:26:23.409 iops : min= 96, max= 646, avg=254.65, stdev=123.05, samples=20 00:26:23.409 lat (usec) : 750=0.46%, 1000=0.46% 00:26:23.409 lat (msec) : 2=0.57%, 4=0.23%, 20=0.19%, 50=1.00%, 100=12.84% 00:26:23.409 lat (msec) : 250=39.77%, 500=39.39%, 750=5.10% 00:26:23.409 cpu : usr=0.81%, sys=0.84%, ctx=1193, majf=0, minf=1 00:26:23.409 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:23.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.409 issued rwts: total=0,2610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.409 job4: (groupid=0, jobs=1): err= 0: pid=301372: Mon Oct 14 13:37:14 2024 00:26:23.409 write: IOPS=296, BW=74.1MiB/s (77.7MB/s)(759MiB/10232msec); 0 zone resets 00:26:23.410 slat (usec): min=15, max=161015, avg=2642.58, stdev=7529.25 00:26:23.410 clat (usec): min=891, max=672951, avg=213081.30, stdev=127661.36 00:26:23.410 lat (usec): min=927, max=679066, avg=215723.88, stdev=129301.59 00:26:23.410 clat percentiles (msec): 00:26:23.410 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 34], 20.00th=[ 108], 00:26:23.410 | 30.00th=[ 144], 40.00th=[ 176], 50.00th=[ 213], 60.00th=[ 236], 00:26:23.410 | 70.00th=[ 266], 80.00th=[ 300], 90.00th=[ 355], 95.00th=[ 472], 00:26:23.410 | 99.00th=[ 575], 99.50th=[ 609], 99.90th=[ 651], 99.95th=[ 659], 00:26:23.410 | 99.99th=[ 676] 00:26:23.410 bw ( KiB/s): min=32833, max=136192, per=8.08%, avg=76024.55, stdev=27555.31, samples=20 00:26:23.410 iops : min= 128, 
max= 532, avg=296.95, stdev=107.65, samples=20 00:26:23.410 lat (usec) : 1000=0.07% 00:26:23.410 lat (msec) : 2=0.33%, 4=1.85%, 10=2.27%, 20=2.14%, 50=5.47% 00:26:23.410 lat (msec) : 100=4.85%, 250=47.82%, 500=31.71%, 750=3.49% 00:26:23.410 cpu : usr=0.80%, sys=1.06%, ctx=1531, majf=0, minf=1 00:26:23.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:23.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.410 issued rwts: total=0,3034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.410 job5: (groupid=0, jobs=1): err= 0: pid=301373: Mon Oct 14 13:37:14 2024 00:26:23.410 write: IOPS=297, BW=74.4MiB/s (78.1MB/s)(762MiB/10233msec); 0 zone resets 00:26:23.410 slat (usec): min=14, max=232201, avg=2418.15, stdev=9293.42 00:26:23.410 clat (usec): min=999, max=633314, avg=211687.84, stdev=141248.69 00:26:23.410 lat (usec): min=1039, max=633381, avg=214105.99, stdev=142073.46 00:26:23.410 clat percentiles (msec): 00:26:23.410 | 1.00th=[ 3], 5.00th=[ 25], 10.00th=[ 46], 20.00th=[ 83], 00:26:23.410 | 30.00th=[ 121], 40.00th=[ 169], 50.00th=[ 197], 60.00th=[ 215], 00:26:23.410 | 70.00th=[ 251], 80.00th=[ 309], 90.00th=[ 443], 95.00th=[ 502], 00:26:23.410 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 617], 99.95th=[ 634], 00:26:23.410 | 99.99th=[ 634] 00:26:23.410 bw ( KiB/s): min=30720, max=152576, per=8.12%, avg=76352.75, stdev=32453.64, samples=20 00:26:23.410 iops : min= 120, max= 596, avg=298.25, stdev=126.77, samples=20 00:26:23.410 lat (usec) : 1000=0.03% 00:26:23.410 lat (msec) : 2=0.92%, 4=0.79%, 10=1.12%, 20=1.48%, 50=6.70% 00:26:23.410 lat (msec) : 100=13.00%, 250=46.01%, 500=24.65%, 750=5.32% 00:26:23.410 cpu : usr=0.91%, sys=1.04%, ctx=1451, majf=0, minf=1 00:26:23.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:23.410 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.410 issued rwts: total=0,3047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.410 job6: (groupid=0, jobs=1): err= 0: pid=301374: Mon Oct 14 13:37:14 2024 00:26:23.410 write: IOPS=321, BW=80.4MiB/s (84.3MB/s)(821MiB/10207msec); 0 zone resets 00:26:23.410 slat (usec): min=16, max=58749, avg=1703.06, stdev=6015.82 00:26:23.410 clat (usec): min=743, max=634508, avg=197074.65, stdev=143722.52 00:26:23.410 lat (usec): min=786, max=644344, avg=198777.71, stdev=145436.80 00:26:23.410 clat percentiles (usec): 00:26:23.410 | 1.00th=[ 1221], 5.00th=[ 3720], 10.00th=[ 17957], 20.00th=[ 84411], 00:26:23.410 | 30.00th=[111674], 40.00th=[131597], 50.00th=[158335], 60.00th=[206570], 00:26:23.410 | 70.00th=[256902], 80.00th=[299893], 90.00th=[429917], 95.00th=[476054], 00:26:23.410 | 99.00th=[591397], 99.50th=[608175], 99.90th=[624952], 99.95th=[633340], 00:26:23.410 | 99.99th=[633340] 00:26:23.410 bw ( KiB/s): min=32768, max=157184, per=8.77%, avg=82460.80, stdev=40494.89, samples=20 00:26:23.410 iops : min= 128, max= 614, avg=322.10, stdev=158.16, samples=20 00:26:23.410 lat (usec) : 750=0.03%, 1000=0.55% 00:26:23.410 lat (msec) : 2=1.61%, 4=3.17%, 10=2.71%, 20=2.38%, 50=3.99% 00:26:23.410 lat (msec) : 100=11.66%, 250=42.75%, 500=27.44%, 750=3.71% 00:26:23.410 cpu : usr=0.92%, sys=1.21%, ctx=2294, majf=0, minf=1 00:26:23.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:23.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.410 issued rwts: total=0,3284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.410 job7: (groupid=0, jobs=1): 
err= 0: pid=301375: Mon Oct 14 13:37:14 2024 00:26:23.410 write: IOPS=284, BW=71.2MiB/s (74.6MB/s)(728MiB/10220msec); 0 zone resets 00:26:23.410 slat (usec): min=24, max=158337, avg=1422.46, stdev=6669.53 00:26:23.410 clat (usec): min=1152, max=761158, avg=223059.64, stdev=149992.30 00:26:23.410 lat (usec): min=1229, max=761205, avg=224482.10, stdev=151322.54 00:26:23.410 clat percentiles (msec): 00:26:23.410 | 1.00th=[ 7], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 80], 00:26:23.410 | 30.00th=[ 116], 40.00th=[ 174], 50.00th=[ 213], 60.00th=[ 249], 00:26:23.410 | 70.00th=[ 284], 80.00th=[ 326], 90.00th=[ 422], 95.00th=[ 502], 00:26:23.410 | 99.00th=[ 718], 99.50th=[ 751], 99.90th=[ 760], 99.95th=[ 760], 00:26:23.410 | 99.99th=[ 760] 00:26:23.410 bw ( KiB/s): min=18432, max=183808, per=7.75%, avg=72876.35, stdev=35404.87, samples=20 00:26:23.410 iops : min= 72, max= 718, avg=284.65, stdev=138.30, samples=20 00:26:23.410 lat (msec) : 2=0.21%, 4=0.31%, 10=1.00%, 20=1.51%, 50=7.35% 00:26:23.410 lat (msec) : 100=15.67%, 250=34.02%, 500=34.64%, 750=4.98%, 1000=0.31% 00:26:23.410 cpu : usr=0.83%, sys=1.12%, ctx=2208, majf=0, minf=1 00:26:23.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:26:23.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.410 issued rwts: total=0,2910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.410 job8: (groupid=0, jobs=1): err= 0: pid=301376: Mon Oct 14 13:37:14 2024 00:26:23.410 write: IOPS=405, BW=101MiB/s (106MB/s)(1038MiB/10233msec); 0 zone resets 00:26:23.410 slat (usec): min=25, max=156227, avg=1785.12, stdev=5422.27 00:26:23.410 clat (msec): min=3, max=507, avg=155.75, stdev=103.22 00:26:23.410 lat (msec): min=3, max=507, avg=157.53, stdev=103.68 00:26:23.410 clat percentiles (msec): 00:26:23.410 | 1.00th=[ 23], 5.00th=[ 44], 
10.00th=[ 52], 20.00th=[ 84], 00:26:23.410 | 30.00th=[ 100], 40.00th=[ 114], 50.00th=[ 127], 60.00th=[ 136], 00:26:23.410 | 70.00th=[ 167], 80.00th=[ 218], 90.00th=[ 317], 95.00th=[ 393], 00:26:23.410 | 99.00th=[ 485], 99.50th=[ 493], 99.90th=[ 506], 99.95th=[ 506], 00:26:23.410 | 99.99th=[ 506] 00:26:23.410 bw ( KiB/s): min=35911, max=260598, per=11.12%, avg=104604.65, stdev=54628.40, samples=20 00:26:23.410 iops : min= 140, max= 1017, avg=408.55, stdev=213.27, samples=20 00:26:23.410 lat (msec) : 4=0.10%, 10=0.41%, 20=0.36%, 50=8.77%, 100=21.78% 00:26:23.410 lat (msec) : 250=52.95%, 500=15.47%, 750=0.17% 00:26:23.410 cpu : usr=1.18%, sys=1.31%, ctx=1649, majf=0, minf=1 00:26:23.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:23.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.410 issued rwts: total=0,4151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.410 job9: (groupid=0, jobs=1): err= 0: pid=301377: Mon Oct 14 13:37:14 2024 00:26:23.410 write: IOPS=332, BW=83.2MiB/s (87.2MB/s)(839MiB/10082msec); 0 zone resets 00:26:23.410 slat (usec): min=14, max=96657, avg=2328.80, stdev=6424.91 00:26:23.410 clat (usec): min=793, max=638514, avg=189971.52, stdev=138220.40 00:26:23.410 lat (usec): min=854, max=645859, avg=192300.31, stdev=140169.74 00:26:23.410 clat percentiles (msec): 00:26:23.410 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 42], 20.00th=[ 81], 00:26:23.410 | 30.00th=[ 108], 40.00th=[ 133], 50.00th=[ 155], 60.00th=[ 186], 00:26:23.410 | 70.00th=[ 215], 80.00th=[ 300], 90.00th=[ 401], 95.00th=[ 485], 00:26:23.410 | 99.00th=[ 592], 99.50th=[ 609], 99.90th=[ 634], 99.95th=[ 634], 00:26:23.410 | 99.99th=[ 642] 00:26:23.410 bw ( KiB/s): min=28672, max=216654, per=8.96%, avg=84227.90, stdev=49170.77, samples=20 00:26:23.410 iops : min= 112, max= 846, 
avg=329.00, stdev=192.03, samples=20 00:26:23.410 lat (usec) : 1000=0.12% 00:26:23.410 lat (msec) : 2=0.09%, 4=0.72%, 10=0.66%, 20=2.47%, 50=8.11% 00:26:23.410 lat (msec) : 100=14.64%, 250=49.76%, 500=18.96%, 750=4.47% 00:26:23.410 cpu : usr=0.93%, sys=1.07%, ctx=1648, majf=0, minf=1 00:26:23.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:23.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.410 issued rwts: total=0,3354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.410 job10: (groupid=0, jobs=1): err= 0: pid=301378: Mon Oct 14 13:37:14 2024 00:26:23.410 write: IOPS=408, BW=102MiB/s (107MB/s)(1045MiB/10234msec); 0 zone resets 00:26:23.410 slat (usec): min=24, max=49386, avg=2295.45, stdev=4836.19 00:26:23.410 clat (msec): min=3, max=588, avg=154.35, stdev=88.00 00:26:23.410 lat (msec): min=3, max=604, avg=156.64, stdev=89.01 00:26:23.410 clat percentiles (msec): 00:26:23.410 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 80], 00:26:23.410 | 30.00th=[ 85], 40.00th=[ 105], 50.00th=[ 136], 60.00th=[ 169], 00:26:23.410 | 70.00th=[ 203], 80.00th=[ 232], 90.00th=[ 268], 95.00th=[ 313], 00:26:23.410 | 99.00th=[ 397], 99.50th=[ 510], 99.90th=[ 567], 99.95th=[ 592], 00:26:23.410 | 99.99th=[ 592] 00:26:23.410 bw ( KiB/s): min=49152, max=292352, per=11.20%, avg=105359.75, stdev=58291.12, samples=20 00:26:23.410 iops : min= 192, max= 1142, avg=411.55, stdev=227.70, samples=20 00:26:23.410 lat (msec) : 4=0.05%, 10=0.17%, 20=0.43%, 50=4.33%, 100=32.45% 00:26:23.410 lat (msec) : 250=48.03%, 500=13.97%, 750=0.57% 00:26:23.410 cpu : usr=1.21%, sys=1.22%, ctx=1123, majf=0, minf=1 00:26:23.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:23.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:23.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.410 issued rwts: total=0,4179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.410 00:26:23.410 Run status group 0 (all jobs): 00:26:23.410 WRITE: bw=918MiB/s (963MB/s), 64.7MiB/s-103MiB/s (67.9MB/s-108MB/s), io=9399MiB (9856MB), run=10038-10234msec 00:26:23.410 00:26:23.410 Disk stats (read/write): 00:26:23.410 nvme0n1: ios=41/7703, merge=0/0, ticks=350/1222020, in_queue=1222370, util=100.00% 00:26:23.410 nvme10n1: ios=45/8174, merge=0/0, ticks=3882/1168582, in_queue=1172464, util=100.00% 00:26:23.410 nvme1n1: ios=40/5508, merge=0/0, ticks=3753/1215769, in_queue=1219522, util=100.00% 00:26:23.410 nvme2n1: ios=0/5011, merge=0/0, ticks=0/1222893, in_queue=1222893, util=97.85% 00:26:23.410 nvme3n1: ios=0/6030, merge=0/0, ticks=0/1240277, in_queue=1240277, util=97.98% 00:26:23.410 nvme4n1: ios=46/6055, merge=0/0, ticks=3807/1187712, in_queue=1191519, util=100.00% 00:26:23.410 nvme5n1: ios=0/6548, merge=0/0, ticks=0/1251502, in_queue=1251502, util=98.36% 00:26:23.411 nvme6n1: ios=44/5794, merge=0/0, ticks=2061/1258420, in_queue=1260481, util=100.00% 00:26:23.411 nvme7n1: ios=15/8261, merge=0/0, ticks=1106/1239354, in_queue=1240460, util=99.84% 00:26:23.411 nvme8n1: ios=0/6495, merge=0/0, ticks=0/1219290, in_queue=1219290, util=98.95% 00:26:23.411 nvme9n1: ios=0/8318, merge=0/0, ticks=0/1232542, in_queue=1232542, util=99.15% 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:26:23.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:23.411 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK2 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.411 13:37:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:23.411 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:23.411 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:23.411 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:23.411 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:23.411 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:23.411 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:23.411 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:23.668 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:23.668 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:23.668 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.668 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.668 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.668 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.668 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:23.925 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:23.925 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:23.925 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:23.925 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:23.926 13:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.926 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:24.183 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # 
grep -q -w SPDK5 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.183 13:37:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:24.440 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.440 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:24.698 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:24.698 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:24.698 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:24.955 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:24.955 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:24.956 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.956 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:25.213 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:25.213 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:25.213 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.214 13:37:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:25.214 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:25.214 13:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 
00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.214 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:25.471 rmmod nvme_tcp 00:26:25.471 rmmod nvme_fabrics 00:26:25.471 rmmod nvme_keyring 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 296369 ']' 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 296369 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 296369 ']' 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 296369 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 296369 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 296369' 00:26:25.471 killing process with pid 296369 00:26:25.471 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 296369 00:26:25.472 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 296369 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.044 13:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip 
-4 addr flush cvl_0_1 00:26:27.947 00:26:27.947 real 1m1.287s 00:26:27.947 user 3m37.660s 00:26:27.947 sys 0m15.628s 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.947 ************************************ 00:26:27.947 END TEST nvmf_multiconnection 00:26:27.947 ************************************ 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:27.947 ************************************ 00:26:27.947 START TEST nvmf_initiator_timeout 00:26:27.947 ************************************ 00:26:27.947 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:28.206 * Looking for test storage... 
00:26:28.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:28.206 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.207 --rc genhtml_branch_coverage=1 00:26:28.207 --rc genhtml_function_coverage=1 00:26:28.207 --rc genhtml_legend=1 00:26:28.207 --rc geninfo_all_blocks=1 00:26:28.207 --rc geninfo_unexecuted_blocks=1 00:26:28.207 00:26:28.207 ' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.207 --rc genhtml_branch_coverage=1 00:26:28.207 --rc genhtml_function_coverage=1 00:26:28.207 --rc genhtml_legend=1 00:26:28.207 --rc geninfo_all_blocks=1 00:26:28.207 --rc geninfo_unexecuted_blocks=1 00:26:28.207 00:26:28.207 ' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.207 --rc genhtml_branch_coverage=1 00:26:28.207 --rc genhtml_function_coverage=1 00:26:28.207 --rc genhtml_legend=1 00:26:28.207 --rc geninfo_all_blocks=1 00:26:28.207 --rc geninfo_unexecuted_blocks=1 00:26:28.207 00:26:28.207 ' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.207 --rc genhtml_branch_coverage=1 00:26:28.207 --rc genhtml_function_coverage=1 00:26:28.207 --rc genhtml_legend=1 00:26:28.207 --rc geninfo_all_blocks=1 00:26:28.207 --rc geninfo_unexecuted_blocks=1 00:26:28.207 00:26:28.207 ' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.207 
13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.207 13:37:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.741 13:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:30.741 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:30.741 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.741 13:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:30.741 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.741 13:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:30.741 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.741 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.742 13:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:30.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:26:30.742 00:26:30.742 --- 10.0.0.2 ping statistics --- 00:26:30.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.742 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:26:30.742 00:26:30.742 --- 10.0.0.1 ping statistics --- 00:26:30.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.742 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=304570 
00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 304570 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 304570 ']' 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.742 [2024-10-14 13:37:22.253895] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:26:30.742 [2024-10-14 13:37:22.253980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.742 [2024-10-14 13:37:22.324634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:30.742 [2024-10-14 13:37:22.374767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:30.742 [2024-10-14 13:37:22.374821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.742 [2024-10-14 13:37:22.374849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.742 [2024-10-14 13:37:22.374860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.742 [2024-10-14 13:37:22.374869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.742 [2024-10-14 13:37:22.376600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.742 [2024-10-14 13:37:22.376635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.742 [2024-10-14 13:37:22.376695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.742 [2024-10-14 13:37:22.376698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:30.742 
13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.742 Malloc0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.742 Delay0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.742 [2024-10-14 13:37:22.573000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.742 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.000 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.000 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.000 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.000 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.000 [2024-10-14 13:37:22.601322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.000 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.000 13:37:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:31.567 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:31.567 
13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:31.567 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:31.567 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:31.567 13:37:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=304992 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:33.464 13:37:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:33.722 [global] 00:26:33.722 thread=1 00:26:33.722 invalidate=1 00:26:33.722 rw=write 00:26:33.722 time_based=1 00:26:33.722 runtime=60 00:26:33.722 ioengine=libaio 00:26:33.722 direct=1 00:26:33.722 bs=4096 00:26:33.722 
iodepth=1 00:26:33.722 norandommap=0 00:26:33.722 numjobs=1 00:26:33.722 00:26:33.722 verify_dump=1 00:26:33.722 verify_backlog=512 00:26:33.722 verify_state_save=0 00:26:33.722 do_verify=1 00:26:33.722 verify=crc32c-intel 00:26:33.722 [job0] 00:26:33.722 filename=/dev/nvme0n1 00:26:33.722 Could not set queue depth (nvme0n1) 00:26:33.722 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:33.722 fio-3.35 00:26:33.722 Starting 1 thread 00:26:37.003 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:37.003 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.003 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.003 true 00:26:37.003 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.003 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:37.003 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.004 true 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:37.004 true 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.004 true 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.004 13:37:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:39.533 true 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:39.533 true 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.533 13:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:39.533 true 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:39.533 true 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:39.533 13:37:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 304992 00:27:35.743 00:27:35.743 job0: (groupid=0, jobs=1): err= 0: pid=305065: Mon Oct 14 13:38:25 2024 00:27:35.743 read: IOPS=224, BW=897KiB/s (918kB/s)(52.6MiB/60003msec) 00:27:35.743 slat (nsec): min=4519, max=59796, avg=11056.54, stdev=6409.57 00:27:35.743 clat (usec): min=214, max=41198k, avg=4218.80, stdev=355217.22 00:27:35.743 lat (usec): min=219, max=41198k, avg=4229.86, stdev=355217.27 00:27:35.743 clat percentiles (usec): 00:27:35.743 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:27:35.743 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:27:35.743 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 347], 00:27:35.743 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:35.743 | 99.99th=[42206] 00:27:35.743 write: IOPS=230, BW=922KiB/s (944kB/s)(54.0MiB/60003msec); 0 zone resets 00:27:35.743 slat (nsec): min=5940, max=69589, avg=12830.14, stdev=6928.00 00:27:35.743 clat (usec): min=163, max=471, avg=204.23, stdev=25.82 00:27:35.744 lat (usec): min=170, max=485, avg=217.06, stdev=30.50 00:27:35.744 clat percentiles (usec): 00:27:35.744 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:27:35.744 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:27:35.744 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 249], 00:27:35.744 | 99.00th=[ 306], 99.50th=[ 334], 99.90th=[ 408], 99.95th=[ 416], 00:27:35.744 | 99.99th=[ 429] 00:27:35.744 bw ( KiB/s): min= 640, max= 9328, per=100.00%, avg=5764.89, stdev=3008.18, samples=18 00:27:35.744 iops : min= 160, max= 2332, avg=1441.22, stdev=752.05, samples=18 00:27:35.744 lat (usec) : 250=62.13%, 500=36.56%, 750=0.22%, 1000=0.01% 00:27:35.744 lat (msec) : 2=0.01%, 50=1.07%, >=2000=0.01% 00:27:35.744 cpu : usr=0.41%, sys=0.68%, ctx=27279, majf=0, minf=1 00:27:35.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:35.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.744 issued rwts: total=13454,13824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:35.744 00:27:35.744 Run status group 0 (all jobs): 00:27:35.744 READ: bw=897KiB/s (918kB/s), 897KiB/s-897KiB/s (918kB/s-918kB/s), io=52.6MiB (55.1MB), run=60003-60003msec 00:27:35.744 WRITE: bw=922KiB/s (944kB/s), 922KiB/s-922KiB/s (944kB/s-944kB/s), io=54.0MiB (56.6MB), run=60003-60003msec 00:27:35.744 00:27:35.744 Disk stats (read/write): 00:27:35.744 nvme0n1: ios=13550/13824, merge=0/0, ticks=15348/2648, in_queue=17996, 
util=99.54% 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:35.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:35.744 nvmf hotplug test: fio successful as expected 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.744 13:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.744 rmmod nvme_tcp 00:27:35.744 rmmod nvme_fabrics 00:27:35.744 rmmod nvme_keyring 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 304570 ']' 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 304570 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 304570 ']' 00:27:35.744 
13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 304570 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 304570 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 304570' 00:27:35.744 killing process with pid 304570 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 304570 00:27:35.744 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 304570 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # 
iptables-restore 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.744 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.319 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:36.319 00:27:36.319 real 1m8.373s 00:27:36.319 user 4m10.875s 00:27:36.319 sys 0m7.186s 00:27:36.319 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:36.319 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:36.319 ************************************ 00:27:36.319 END TEST nvmf_initiator_timeout 00:27:36.319 ************************************ 00:27:36.319 13:38:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:36.319 13:38:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:36.319 13:38:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:36.319 13:38:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:36.579 13:38:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.484 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:38.484 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:38.485 13:38:30 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:38.485 13:38:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:38.745 ************************************ 00:27:38.745 START 
TEST nvmf_perf_adq 00:27:38.745 ************************************ 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:38.745 * Looking for test storage... 00:27:38.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.745 13:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.745 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:38.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.746 --rc genhtml_branch_coverage=1 00:27:38.746 --rc genhtml_function_coverage=1 00:27:38.746 --rc genhtml_legend=1 00:27:38.746 --rc geninfo_all_blocks=1 00:27:38.746 --rc geninfo_unexecuted_blocks=1 00:27:38.746 00:27:38.746 ' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:38.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.746 --rc genhtml_branch_coverage=1 00:27:38.746 --rc genhtml_function_coverage=1 00:27:38.746 --rc genhtml_legend=1 00:27:38.746 --rc geninfo_all_blocks=1 00:27:38.746 --rc geninfo_unexecuted_blocks=1 00:27:38.746 00:27:38.746 ' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:38.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.746 --rc genhtml_branch_coverage=1 00:27:38.746 --rc genhtml_function_coverage=1 00:27:38.746 --rc genhtml_legend=1 00:27:38.746 --rc geninfo_all_blocks=1 00:27:38.746 --rc geninfo_unexecuted_blocks=1 00:27:38.746 00:27:38.746 ' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:38.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.746 --rc genhtml_branch_coverage=1 00:27:38.746 --rc genhtml_function_coverage=1 00:27:38.746 --rc genhtml_legend=1 00:27:38.746 --rc geninfo_all_blocks=1 00:27:38.746 --rc geninfo_unexecuted_blocks=1 00:27:38.746 00:27:38.746 ' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.746 
13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:38.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:38.746 13:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.746 13:38:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.280 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.281 13:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:41.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:41.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:41.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 
0 )) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:41.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:41.281 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:41.540 13:38:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:45.754 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:51.030 13:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.030 13:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:51.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:27:51.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:51.030 Found net devices under 0000:0a:00.0: cvl_0_0 
00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:51.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.030 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.031 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.031 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.031 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.031 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:51.031 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:27:51.031 00:27:51.031 --- 10.0.0.2 ping statistics --- 00:27:51.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.031 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:27:51.031 00:27:51.031 --- 10.0.0.1 ping statistics --- 00:27:51.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.031 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=316842 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 316842 00:27:51.031 
13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 316842 ']' 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 [2024-10-14 13:38:42.130927] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:27:51.031 [2024-10-14 13:38:42.131025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.031 [2024-10-14 13:38:42.205147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.031 [2024-10-14 13:38:42.259316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.031 [2024-10-14 13:38:42.259377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:51.031 [2024-10-14 13:38:42.259407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.031 [2024-10-14 13:38:42.259420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.031 [2024-10-14 13:38:42.259431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.031 [2024-10-14 13:38:42.261168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.031 [2024-10-14 13:38:42.261195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.031 [2024-10-14 13:38:42.263150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.031 [2024-10-14 13:38:42.263163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
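The `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT` installed at nvmf/common.sh@510 is what guarantees the shm dump and `nvmftestfini` run even if the test is interrupted. The same pattern in miniature, with a hypothetical `echo cleanup` standing in for the real teardown:

```shell
# EXIT-trap cleanup pattern, run in a subshell so it is self-contained:
# the trap fires when the subshell exits, after the work has completed.
out=$( ( trap 'echo cleanup' EXIT; echo work ) )
printf '%s\n' "$out"
```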
common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 [2024-10-14 13:38:42.557812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 
13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 Malloc1 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.031 [2024-10-14 13:38:42.625167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=316871 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:51.031 13:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:52.929 "tick_rate": 2700000000, 00:27:52.929 "poll_groups": [ 00:27:52.929 { 00:27:52.929 "name": "nvmf_tgt_poll_group_000", 00:27:52.929 "admin_qpairs": 1, 00:27:52.929 "io_qpairs": 1, 00:27:52.929 "current_admin_qpairs": 1, 00:27:52.929 "current_io_qpairs": 1, 00:27:52.929 "pending_bdev_io": 0, 00:27:52.929 "completed_nvme_io": 19918, 00:27:52.929 "transports": [ 00:27:52.929 { 00:27:52.929 "trtype": "TCP" 00:27:52.929 } 00:27:52.929 ] 00:27:52.929 }, 00:27:52.929 { 00:27:52.929 "name": "nvmf_tgt_poll_group_001", 00:27:52.929 "admin_qpairs": 0, 00:27:52.929 "io_qpairs": 1, 00:27:52.929 "current_admin_qpairs": 0, 00:27:52.929 "current_io_qpairs": 1, 00:27:52.929 "pending_bdev_io": 0, 00:27:52.929 "completed_nvme_io": 19328, 00:27:52.929 "transports": [ 
00:27:52.929 { 00:27:52.929 "trtype": "TCP" 00:27:52.929 } 00:27:52.929 ] 00:27:52.929 }, 00:27:52.929 { 00:27:52.929 "name": "nvmf_tgt_poll_group_002", 00:27:52.929 "admin_qpairs": 0, 00:27:52.929 "io_qpairs": 1, 00:27:52.929 "current_admin_qpairs": 0, 00:27:52.929 "current_io_qpairs": 1, 00:27:52.929 "pending_bdev_io": 0, 00:27:52.929 "completed_nvme_io": 19164, 00:27:52.929 "transports": [ 00:27:52.929 { 00:27:52.929 "trtype": "TCP" 00:27:52.929 } 00:27:52.929 ] 00:27:52.929 }, 00:27:52.929 { 00:27:52.929 "name": "nvmf_tgt_poll_group_003", 00:27:52.929 "admin_qpairs": 0, 00:27:52.929 "io_qpairs": 1, 00:27:52.929 "current_admin_qpairs": 0, 00:27:52.929 "current_io_qpairs": 1, 00:27:52.929 "pending_bdev_io": 0, 00:27:52.929 "completed_nvme_io": 19883, 00:27:52.929 "transports": [ 00:27:52.929 { 00:27:52.929 "trtype": "TCP" 00:27:52.929 } 00:27:52.929 ] 00:27:52.929 } 00:27:52.929 ] 00:27:52.929 }' 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:52.929 13:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 316871 00:28:01.037 Initializing NVMe Controllers 00:28:01.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:01.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:01.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:01.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:01.037 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:01.037 Initialization complete. Launching workers. 00:28:01.037 ======================================================== 00:28:01.037 Latency(us) 00:28:01.037 Device Information : IOPS MiB/s Average min max 00:28:01.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10406.90 40.65 6151.48 2126.84 10264.12 00:28:01.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10245.10 40.02 6247.94 2202.23 10653.56 00:28:01.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10094.30 39.43 6340.15 2486.72 9982.52 00:28:01.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10452.60 40.83 6122.96 1783.20 9766.86 00:28:01.037 ======================================================== 00:28:01.037 Total : 41198.89 160.93 6214.45 1783.20 10653.56 00:28:01.037 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:01.037 rmmod nvme_tcp 00:28:01.037 rmmod nvme_fabrics 00:28:01.037 rmmod nvme_keyring 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:01.037 13:38:52 
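Two of the pass/fail numbers above can be re-derived from the captured output: the `jq ... | wc -l` step counts poll groups with `current_io_qpairs == 1` (which must equal the 4 perf cores, per the `[[ 4 -ne 4 ]]` gate), and the Total row of the latency table is simply the sum of the four per-core IOPS values. A sketch over abridged sample data, using grep/awk instead of a live RPC socket:

```shell
# Abridged nvmf_get_stats output: one line per poll group.
stats='"current_io_qpairs": 1,
"current_io_qpairs": 1,
"current_io_qpairs": 1,
"current_io_qpairs": 1,'
# Count the groups actively serving IO (the test script does this with jq + wc -l).
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "active poll groups: $count"

# Recompute the Total IOPS from the four per-core rows of the perf table.
printf '%s\n' 10406.90 10245.10 10094.30 10452.60 |
    awk '{s += $1} END {printf "total IOPS: %.2f\n", s}'
```

The summed 41198.90 differs from the printed 41198.89 only because the per-core values in the table are themselves rounded.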
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 316842 ']' 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 316842 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 316842 ']' 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 316842 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 316842 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 316842' 00:28:01.037 killing process with pid 316842 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 316842 00:28:01.037 13:38:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 316842 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:01.296 13:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.296 13:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.833 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.833 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:03.833 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:03.833 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:04.091 13:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:06.632 13:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:11.912 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
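The `iptr` step above (nvmf/common.sh@297/@789) is the counterpart of the tagged inserts: it pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, so only SPDK-installed rules disappear while pre-existing rules survive. The filter itself, applied to a small sample ruleset (the rule text here is illustrative):

```shell
# Sample saved ruleset: one pre-existing rule, one SPDK-tagged rule.
saved='-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: tagged"'
# Everything except the tagged rule would be fed back to iptables-restore.
printf '%s\n' "$saved" | grep -v SPDK_NVMF
```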
nvmf/common.sh@474 -- # prepare_net_devs 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.913 13:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.913 13:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.913 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:11.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.913 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.913 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:28:11.913 00:28:11.913 --- 10.0.0.2 ping statistics --- 00:28:11.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.913 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:28:11.914 00:28:11.914 --- 10.0.0.1 ping statistics --- 00:28:11.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.914 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:11.914 net.core.busy_poll = 1 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:11.914 net.core.busy_read = 1 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:11.914 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=319741 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 
319741 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 319741 ']' 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.172 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.172 [2024-10-14 13:39:03.903535] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:28:12.173 [2024-10-14 13:39:03.903618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.173 [2024-10-14 13:39:03.969623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.173 [2024-10-14 13:39:04.017376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.173 [2024-10-14 13:39:04.017432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.173 [2024-10-14 13:39:04.017461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.173 [2024-10-14 13:39:04.017472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:12.173 [2024-10-14 13:39:04.017481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.173 [2024-10-14 13:39:04.018958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.173 [2024-10-14 13:39:04.019025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.173 [2024-10-14 13:39:04.019093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.173 [2024-10-14 13:39:04.019097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.431 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.689 [2024-10-14 13:39:04.305541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.689 13:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.689 Malloc1 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.689 [2024-10-14 13:39:04.376366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=319773 
00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:12.689 13:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:14.706 "tick_rate": 2700000000, 00:28:14.706 "poll_groups": [ 00:28:14.706 { 00:28:14.706 "name": "nvmf_tgt_poll_group_000", 00:28:14.706 "admin_qpairs": 1, 00:28:14.706 "io_qpairs": 3, 00:28:14.706 "current_admin_qpairs": 1, 00:28:14.706 "current_io_qpairs": 3, 00:28:14.706 "pending_bdev_io": 0, 00:28:14.706 "completed_nvme_io": 25920, 00:28:14.706 "transports": [ 00:28:14.706 { 00:28:14.706 "trtype": "TCP" 00:28:14.706 } 00:28:14.706 ] 00:28:14.706 }, 00:28:14.706 { 00:28:14.706 "name": "nvmf_tgt_poll_group_001", 00:28:14.706 "admin_qpairs": 0, 00:28:14.706 "io_qpairs": 1, 00:28:14.706 "current_admin_qpairs": 0, 00:28:14.706 "current_io_qpairs": 1, 00:28:14.706 "pending_bdev_io": 0, 00:28:14.706 "completed_nvme_io": 25891, 00:28:14.706 "transports": [ 00:28:14.706 { 00:28:14.706 "trtype": "TCP" 00:28:14.706 } 00:28:14.706 ] 00:28:14.706 }, 00:28:14.706 { 00:28:14.706 "name": "nvmf_tgt_poll_group_002", 00:28:14.706 "admin_qpairs": 0, 00:28:14.706 "io_qpairs": 0, 00:28:14.706 "current_admin_qpairs": 0, 
00:28:14.706 "current_io_qpairs": 0, 00:28:14.706 "pending_bdev_io": 0, 00:28:14.706 "completed_nvme_io": 0, 00:28:14.706 "transports": [ 00:28:14.706 { 00:28:14.706 "trtype": "TCP" 00:28:14.706 } 00:28:14.706 ] 00:28:14.706 }, 00:28:14.706 { 00:28:14.706 "name": "nvmf_tgt_poll_group_003", 00:28:14.706 "admin_qpairs": 0, 00:28:14.706 "io_qpairs": 0, 00:28:14.706 "current_admin_qpairs": 0, 00:28:14.706 "current_io_qpairs": 0, 00:28:14.706 "pending_bdev_io": 0, 00:28:14.706 "completed_nvme_io": 0, 00:28:14.706 "transports": [ 00:28:14.706 { 00:28:14.706 "trtype": "TCP" 00:28:14.706 } 00:28:14.706 ] 00:28:14.706 } 00:28:14.706 ] 00:28:14.706 }' 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:14.706 13:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 319773 00:28:22.942 Initializing NVMe Controllers 00:28:22.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:22.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:22.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:22.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:22.942 Initialization complete. Launching workers. 
00:28:22.942 ======================================================== 00:28:22.942 Latency(us) 00:28:22.942 Device Information : IOPS MiB/s Average min max 00:28:22.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13416.90 52.41 4769.95 2407.88 6698.58 00:28:22.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4724.20 18.45 13550.87 1839.47 60927.30 00:28:22.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4458.00 17.41 14403.59 1959.79 62126.43 00:28:22.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4125.40 16.11 15528.79 1756.21 63855.20 00:28:22.942 ======================================================== 00:28:22.942 Total : 26724.50 104.39 9590.03 1756.21 63855.20 00:28:22.942 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.942 rmmod nvme_tcp 00:28:22.942 rmmod nvme_fabrics 00:28:22.942 rmmod nvme_keyring 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:22.942 13:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 319741 ']' 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 319741 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 319741 ']' 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 319741 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 319741 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 319741' 00:28:22.942 killing process with pid 319741 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 319741 00:28:22.942 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 319741 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:23.201 13:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.201 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.489 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.489 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:26.489 00:28:26.489 real 0m47.567s 00:28:26.489 user 2m39.097s 00:28:26.489 sys 0m11.235s 00:28:26.489 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.490 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.490 ************************************ 00:28:26.490 END TEST nvmf_perf_adq 00:28:26.490 ************************************ 00:28:26.490 13:39:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:26.490 13:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:26.490 13:39:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.490 13:39:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:26.490 ************************************ 00:28:26.490 START TEST nvmf_shutdown 00:28:26.490 ************************************ 00:28:26.490 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:26.490 * Looking for test storage... 00:28:26.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.490 13:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.490 --rc genhtml_branch_coverage=1 00:28:26.490 --rc genhtml_function_coverage=1 00:28:26.490 --rc genhtml_legend=1 00:28:26.490 --rc geninfo_all_blocks=1 00:28:26.490 --rc geninfo_unexecuted_blocks=1 00:28:26.490 00:28:26.490 ' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.490 --rc genhtml_branch_coverage=1 00:28:26.490 --rc genhtml_function_coverage=1 00:28:26.490 --rc genhtml_legend=1 00:28:26.490 --rc geninfo_all_blocks=1 00:28:26.490 --rc geninfo_unexecuted_blocks=1 00:28:26.490 00:28:26.490 ' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.490 --rc genhtml_branch_coverage=1 00:28:26.490 --rc genhtml_function_coverage=1 00:28:26.490 --rc genhtml_legend=1 00:28:26.490 --rc geninfo_all_blocks=1 00:28:26.490 --rc geninfo_unexecuted_blocks=1 00:28:26.490 00:28:26.490 ' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.490 --rc genhtml_branch_coverage=1 00:28:26.490 --rc genhtml_function_coverage=1 00:28:26.490 --rc genhtml_legend=1 00:28:26.490 --rc geninfo_all_blocks=1 00:28:26.490 --rc geninfo_unexecuted_blocks=1 00:28:26.490 00:28:26.490 ' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:26.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:26.490 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.491 ************************************ 00:28:26.491 START TEST nvmf_shutdown_tc1 00:28:26.491 ************************************ 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.491 13:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:29.028 13:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.028 13:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:29.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.028 13:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.028 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:29.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:29.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:29.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.029 13:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:29.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:29.029 00:28:29.029 --- 10.0.0.2 ping statistics --- 00:28:29.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.029 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:28:29.029 00:28:29.029 --- 10.0.0.1 ping statistics --- 00:28:29.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.029 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=323686 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 323686 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 323686 ']' 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:29.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:29.029 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.029 [2024-10-14 13:39:20.692361] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:28:29.029 [2024-10-14 13:39:20.692432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.029 [2024-10-14 13:39:20.755753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.029 [2024-10-14 13:39:20.798978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.029 [2024-10-14 13:39:20.799036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.029 [2024-10-14 13:39:20.799060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.029 [2024-10-14 13:39:20.799070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.029 [2024-10-14 13:39:20.799080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:29.029 [2024-10-14 13:39:20.800608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.029 [2024-10-14 13:39:20.800673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.029 [2024-10-14 13:39:20.800780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.029 [2024-10-14 13:39:20.800787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.289 [2024-10-14 13:39:20.940495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.289 13:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.289 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.289 Malloc1 00:28:29.289 [2024-10-14 13:39:21.032273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.289 Malloc2 00:28:29.289 Malloc3 00:28:29.549 Malloc4 00:28:29.549 Malloc5 00:28:29.549 Malloc6 00:28:29.549 Malloc7 00:28:29.549 Malloc8 00:28:29.549 Malloc9 
00:28:29.808 Malloc10 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=323771 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 323771 /var/tmp/bdevperf.sock 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 323771 ']' 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:29.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:29.808 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": 
${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 
00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:29.809 { 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme$subsystem", 00:28:29.809 "trtype": "$TEST_TRANSPORT", 00:28:29.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "$NVMF_PORT", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.809 "hdgst": ${hdgst:-false}, 00:28:29.809 "ddgst": ${ddgst:-false} 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 } 00:28:29.809 EOF 00:28:29.809 )") 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # jq . 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:29.809 13:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme1", 00:28:29.809 "trtype": "tcp", 00:28:29.809 "traddr": "10.0.0.2", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "4420", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:29.809 "hdgst": false, 00:28:29.809 "ddgst": false 00:28:29.809 }, 00:28:29.809 "method": "bdev_nvme_attach_controller" 00:28:29.809 },{ 00:28:29.809 "params": { 00:28:29.809 "name": "Nvme2", 00:28:29.809 "trtype": "tcp", 00:28:29.809 "traddr": "10.0.0.2", 00:28:29.809 "adrfam": "ipv4", 00:28:29.809 "trsvcid": "4420", 00:28:29.809 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:29.809 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:29.809 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 00:28:29.810 "params": { 00:28:29.810 "name": "Nvme3", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:29.810 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 00:28:29.810 "params": { 00:28:29.810 "name": "Nvme4", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:29.810 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 
00:28:29.810 "params": { 00:28:29.810 "name": "Nvme5", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:29.810 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 00:28:29.810 "params": { 00:28:29.810 "name": "Nvme6", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:29.810 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 00:28:29.810 "params": { 00:28:29.810 "name": "Nvme7", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:29.810 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 00:28:29.810 "params": { 00:28:29.810 "name": "Nvme8", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:29.810 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 00:28:29.810 "params": { 00:28:29.810 "name": "Nvme9", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:29.810 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 },{ 00:28:29.810 "params": { 00:28:29.810 "name": "Nvme10", 00:28:29.810 "trtype": "tcp", 00:28:29.810 "traddr": "10.0.0.2", 00:28:29.810 "adrfam": "ipv4", 00:28:29.810 "trsvcid": "4420", 00:28:29.810 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:29.810 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:29.810 "hdgst": false, 00:28:29.810 "ddgst": false 00:28:29.810 }, 00:28:29.810 "method": "bdev_nvme_attach_controller" 00:28:29.810 }' 00:28:29.810 [2024-10-14 13:39:21.522797] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:28:29.810 [2024-10-14 13:39:21.522885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:29.810 [2024-10-14 13:39:21.587798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.810 [2024-10-14 13:39:21.634871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 323771 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:31.715 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:33.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 323771 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 323686 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.096 { 00:28:33.096 "params": { 00:28:33.096 "name": "Nvme$subsystem", 00:28:33.096 "trtype": "$TEST_TRANSPORT", 00:28:33.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.096 "adrfam": "ipv4", 00:28:33.096 "trsvcid": "$NVMF_PORT", 00:28:33.096 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.096 "hdgst": ${hdgst:-false}, 00:28:33.096 "ddgst": ${ddgst:-false} 00:28:33.096 }, 00:28:33.096 "method": "bdev_nvme_attach_controller" 00:28:33.096 } 00:28:33.096 EOF 00:28:33.096 )") 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.096 { 00:28:33.096 "params": { 00:28:33.096 "name": "Nvme$subsystem", 00:28:33.096 "trtype": "$TEST_TRANSPORT", 00:28:33.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.096 "adrfam": "ipv4", 00:28:33.096 "trsvcid": "$NVMF_PORT", 00:28:33.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.096 "hdgst": ${hdgst:-false}, 00:28:33.096 "ddgst": ${ddgst:-false} 00:28:33.096 }, 00:28:33.096 "method": "bdev_nvme_attach_controller" 00:28:33.096 } 00:28:33.096 EOF 00:28:33.096 )") 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.096 { 00:28:33.096 "params": { 00:28:33.096 "name": "Nvme$subsystem", 00:28:33.096 "trtype": "$TEST_TRANSPORT", 00:28:33.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.096 "adrfam": "ipv4", 00:28:33.096 "trsvcid": "$NVMF_PORT", 00:28:33.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.096 "hdgst": 
${hdgst:-false}, 00:28:33.096 "ddgst": ${ddgst:-false} 00:28:33.096 }, 00:28:33.096 "method": "bdev_nvme_attach_controller" 00:28:33.096 } 00:28:33.096 EOF 00:28:33.096 )") 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.096 { 00:28:33.096 "params": { 00:28:33.096 "name": "Nvme$subsystem", 00:28:33.096 "trtype": "$TEST_TRANSPORT", 00:28:33.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.096 "adrfam": "ipv4", 00:28:33.096 "trsvcid": "$NVMF_PORT", 00:28:33.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.096 "hdgst": ${hdgst:-false}, 00:28:33.096 "ddgst": ${ddgst:-false} 00:28:33.096 }, 00:28:33.096 "method": "bdev_nvme_attach_controller" 00:28:33.096 } 00:28:33.096 EOF 00:28:33.096 )") 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.096 { 00:28:33.096 "params": { 00:28:33.096 "name": "Nvme$subsystem", 00:28:33.096 "trtype": "$TEST_TRANSPORT", 00:28:33.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.096 "adrfam": "ipv4", 00:28:33.096 "trsvcid": "$NVMF_PORT", 00:28:33.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.096 "hdgst": ${hdgst:-false}, 00:28:33.096 "ddgst": ${ddgst:-false} 00:28:33.096 }, 00:28:33.096 "method": "bdev_nvme_attach_controller" 
00:28:33.096 } 00:28:33.096 EOF 00:28:33.096 )") 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.096 { 00:28:33.096 "params": { 00:28:33.096 "name": "Nvme$subsystem", 00:28:33.096 "trtype": "$TEST_TRANSPORT", 00:28:33.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.096 "adrfam": "ipv4", 00:28:33.096 "trsvcid": "$NVMF_PORT", 00:28:33.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.096 "hdgst": ${hdgst:-false}, 00:28:33.096 "ddgst": ${ddgst:-false} 00:28:33.096 }, 00:28:33.096 "method": "bdev_nvme_attach_controller" 00:28:33.096 } 00:28:33.096 EOF 00:28:33.096 )") 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.096 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.096 { 00:28:33.096 "params": { 00:28:33.096 "name": "Nvme$subsystem", 00:28:33.096 "trtype": "$TEST_TRANSPORT", 00:28:33.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.096 "adrfam": "ipv4", 00:28:33.096 "trsvcid": "$NVMF_PORT", 00:28:33.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.097 "hdgst": ${hdgst:-false}, 00:28:33.097 "ddgst": ${ddgst:-false} 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 } 00:28:33.097 EOF 00:28:33.097 )") 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.097 { 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme$subsystem", 00:28:33.097 "trtype": "$TEST_TRANSPORT", 00:28:33.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "$NVMF_PORT", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.097 "hdgst": ${hdgst:-false}, 00:28:33.097 "ddgst": ${ddgst:-false} 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 } 00:28:33.097 EOF 00:28:33.097 )") 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.097 { 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme$subsystem", 00:28:33.097 "trtype": "$TEST_TRANSPORT", 00:28:33.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "$NVMF_PORT", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.097 "hdgst": ${hdgst:-false}, 00:28:33.097 "ddgst": ${ddgst:-false} 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 } 00:28:33.097 EOF 00:28:33.097 )") 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.097 { 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme$subsystem", 00:28:33.097 "trtype": "$TEST_TRANSPORT", 00:28:33.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "$NVMF_PORT", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.097 "hdgst": ${hdgst:-false}, 00:28:33.097 "ddgst": ${ddgst:-false} 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 } 00:28:33.097 EOF 00:28:33.097 )") 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:33.097 13:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme1", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme2", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 
00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme3", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme4", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme5", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme6", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme7", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme8", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme9", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 },{ 00:28:33.097 "params": { 00:28:33.097 "name": "Nvme10", 00:28:33.097 "trtype": "tcp", 00:28:33.097 "traddr": "10.0.0.2", 00:28:33.097 "adrfam": "ipv4", 00:28:33.097 "trsvcid": "4420", 00:28:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:33.097 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:33.097 "hdgst": false, 00:28:33.097 "ddgst": false 00:28:33.097 }, 00:28:33.097 "method": "bdev_nvme_attach_controller" 00:28:33.097 }' 00:28:33.097 [2024-10-14 13:39:24.599529] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
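The comma-joined controller list printed above is assembled by the `gen_nvmf_target_json` helper traced in `nvmf/common.sh` (`config+=("$(cat <<-EOF ...)")` per subsystem, then `IFS=,` and `printf '%s\n' "${config[*]}"` before `jq .`). A minimal standalone sketch of that heredoc-array pattern follows; the transport, address, and port values are placeholders, only two subsystems are generated instead of the log's ten, and the real helper additionally wraps the array in a full bdev-subsystem document and validates it with `jq .`:

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern from nvmf/common.sh: one JSON fragment
# per subsystem is captured from a heredoc into a bash array, then the
# fragments are joined with commas into a single JSON array.
# Placeholder values -- not the CI environment's real transport settings.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # The original uses <<-EOF (tab-stripping); a plain heredoc behaves the
  # same here. ${hdgst:-false}/${ddgst:-false} default the digest flags off.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# "${config[*]}" joins the array elements with the first character of IFS,
# producing the comma-separated object list seen in the log's printf output.
json=$(printf '{"config": [%s]}' "$(IFS=,; printf '%s' "${config[*]}")")
printf '%s\n' "$json"
```

Each fragment is a complete `bdev_nvme_attach_controller` RPC entry, which is why the joined output can be fed directly to bdevperf via `--json`.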
00:28:33.097 [2024-10-14 13:39:24.599606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324189 ] 00:28:33.097 [2024-10-14 13:39:24.664744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.097 [2024-10-14 13:39:24.714029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.473 Running I/O for 1 seconds... 00:28:35.668 1443.00 IOPS, 90.19 MiB/s 00:28:35.668 Latency(us) 00:28:35.668 [2024-10-14T11:39:27.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.668 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme1n1 : 1.12 189.80 11.86 0.00 0.00 316683.38 8543.95 260978.92 00:28:35.668 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme2n1 : 1.18 216.16 13.51 0.00 0.00 287268.03 35340.89 310689.19 00:28:35.668 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme3n1 : 1.13 188.40 11.78 0.00 0.00 308846.52 24078.41 329330.54 00:28:35.668 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme4n1 : 1.18 221.59 13.85 0.00 0.00 265520.59 32039.82 302921.96 00:28:35.668 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme5n1 : 1.13 173.01 10.81 0.00 0.00 340546.70 5437.06 327777.09 00:28:35.668 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 
0x400 00:28:35.668 Nvme6n1 : 1.14 168.13 10.51 0.00 0.00 346442.46 22622.06 337097.77 00:28:35.668 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme7n1 : 1.19 214.26 13.39 0.00 0.00 268584.39 17282.09 318456.41 00:28:35.668 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme8n1 : 1.19 218.02 13.63 0.00 0.00 259312.42 1614.13 285834.05 00:28:35.668 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme9n1 : 1.15 167.09 10.44 0.00 0.00 331346.17 52817.16 329330.54 00:28:35.668 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:35.668 Verification LBA range: start 0x0 length 0x400 00:28:35.668 Nvme10n1 : 1.20 212.92 13.31 0.00 0.00 257417.77 6359.42 354185.67 00:28:35.668 [2024-10-14T11:39:27.521Z] =================================================================================================================== 00:28:35.668 [2024-10-14T11:39:27.521Z] Total : 1969.39 123.09 0.00 0.00 294134.95 1614.13 354185.67 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:35.668 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.669 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.669 rmmod nvme_tcp 00:28:35.669 rmmod nvme_fabrics 00:28:35.669 rmmod nvme_keyring 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 323686 ']' 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 323686 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 323686 ']' 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 323686 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 323686 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 323686' 00:28:35.927 killing process with pid 323686 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 323686 00:28:35.927 13:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 323686 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.193 13:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.193 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.735 00:28:38.735 real 0m11.911s 00:28:38.735 user 0m33.614s 00:28:38.735 sys 0m3.319s 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:38.735 ************************************ 00:28:38.735 END TEST nvmf_shutdown_tc1 00:28:38.735 ************************************ 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:38.735 ************************************ 00:28:38.735 START TEST nvmf_shutdown_tc2 00:28:38.735 ************************************ 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:38.735 13:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.735 13:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.735 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:38.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:38.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:38.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.736 13:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:38.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:28:38.736 00:28:38.736 --- 10.0.0.2 ping statistics --- 00:28:38.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.736 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:38.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:28:38.736 00:28:38.736 --- 10.0.0.1 ping statistics --- 00:28:38.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.736 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.736 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.737 
13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=324947 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 324947 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 324947 ']' 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:38.737 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.737 [2024-10-14 13:39:30.358389] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:28:38.737 [2024-10-14 13:39:30.358506] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.737 [2024-10-14 13:39:30.427753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.737 [2024-10-14 13:39:30.479264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.737 [2024-10-14 13:39:30.479327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.737 [2024-10-14 13:39:30.479341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.737 [2024-10-14 13:39:30.479352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.737 [2024-10-14 13:39:30.479361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:38.737 [2024-10-14 13:39:30.481051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.737 [2024-10-14 13:39:30.481112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.737 [2024-10-14 13:39:30.481139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:38.737 [2024-10-14 13:39:30.481149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.997 [2024-10-14 13:39:30.630299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.997 13:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.997 13:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.997 Malloc1 00:28:38.997 [2024-10-14 13:39:30.716099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.997 Malloc2 00:28:38.997 Malloc3 00:28:38.997 Malloc4 00:28:39.256 Malloc5 00:28:39.256 Malloc6 00:28:39.256 Malloc7 00:28:39.256 Malloc8 00:28:39.256 Malloc9 
00:28:39.515 Malloc10 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=325122 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 325122 /var/tmp/bdevperf.sock 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 325122 ']' 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:28:39.515 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:39.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": 
${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 
00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:39.516 { 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme$subsystem", 00:28:39.516 "trtype": "$TEST_TRANSPORT", 00:28:39.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "$NVMF_PORT", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.516 "hdgst": ${hdgst:-false}, 00:28:39.516 "ddgst": ${ddgst:-false} 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 } 00:28:39.516 EOF 00:28:39.516 )") 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # jq . 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:28:39.516 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme1", 00:28:39.516 "trtype": "tcp", 00:28:39.516 "traddr": "10.0.0.2", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "4420", 00:28:39.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:39.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:39.516 "hdgst": false, 00:28:39.516 "ddgst": false 00:28:39.516 }, 00:28:39.516 "method": "bdev_nvme_attach_controller" 00:28:39.516 },{ 00:28:39.516 "params": { 00:28:39.516 "name": "Nvme2", 00:28:39.516 "trtype": "tcp", 00:28:39.516 "traddr": "10.0.0.2", 00:28:39.516 "adrfam": "ipv4", 00:28:39.516 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 00:28:39.517 "params": { 00:28:39.517 "name": "Nvme3", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 00:28:39.517 "params": { 00:28:39.517 "name": "Nvme4", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 
00:28:39.517 "params": { 00:28:39.517 "name": "Nvme5", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 00:28:39.517 "params": { 00:28:39.517 "name": "Nvme6", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 00:28:39.517 "params": { 00:28:39.517 "name": "Nvme7", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 00:28:39.517 "params": { 00:28:39.517 "name": "Nvme8", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 00:28:39.517 "params": { 00:28:39.517 "name": "Nvme9", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:39.517 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 },{ 00:28:39.517 "params": { 00:28:39.517 "name": "Nvme10", 00:28:39.517 "trtype": "tcp", 00:28:39.517 "traddr": "10.0.0.2", 00:28:39.517 "adrfam": "ipv4", 00:28:39.517 "trsvcid": "4420", 00:28:39.517 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:39.517 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:39.517 "hdgst": false, 00:28:39.517 "ddgst": false 00:28:39.517 }, 00:28:39.517 "method": "bdev_nvme_attach_controller" 00:28:39.517 }' 00:28:39.517 [2024-10-14 13:39:31.216346] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:28:39.517 [2024-10-14 13:39:31.216451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325122 ] 00:28:39.517 [2024-10-14 13:39:31.281872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.517 [2024-10-14 13:39:31.329317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.419 Running I/O for 10 seconds... 
00:28:41.419 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.419 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:41.419 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:41.419 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.419 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.677 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.677 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:41.677 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:41.678 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=151 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 151 -ge 100 ']' 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 325122 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 325122 ']' 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 325122 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 325122 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 325122' 00:28:41.937 killing process with pid 325122 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 325122 00:28:41.937 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 325122 00:28:41.937 Received 
shutdown signal, test time was about 0.901342 seconds 00:28:41.937 00:28:41.937 Latency(us) 00:28:41.937 [2024-10-14T11:39:33.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.937 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme1n1 : 0.90 285.80 17.86 0.00 0.00 219714.18 19418.07 243891.01 00:28:41.937 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme2n1 : 0.88 218.12 13.63 0.00 0.00 283742.56 36894.34 237677.23 00:28:41.937 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme3n1 : 0.90 284.29 17.77 0.00 0.00 213172.34 18155.90 256318.58 00:28:41.937 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme4n1 : 0.86 229.89 14.37 0.00 0.00 254913.05 2512.21 250104.79 00:28:41.937 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme5n1 : 0.88 217.48 13.59 0.00 0.00 266214.72 20388.98 256318.58 00:28:41.937 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme6n1 : 0.86 223.34 13.96 0.00 0.00 252248.62 18932.62 254765.13 00:28:41.937 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme7n1 : 0.87 220.89 13.81 0.00 0.00 249373.84 20680.25 253211.69 00:28:41.937 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme8n1 : 0.88 223.81 13.99 0.00 0.00 239757.02 
3058.35 253211.69 00:28:41.937 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme9n1 : 0.89 214.92 13.43 0.00 0.00 244834.10 22330.79 267192.70 00:28:41.937 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:41.937 Verification LBA range: start 0x0 length 0x400 00:28:41.937 Nvme10n1 : 0.89 215.17 13.45 0.00 0.00 239470.11 22039.51 282727.16 00:28:41.937 [2024-10-14T11:39:33.790Z] =================================================================================================================== 00:28:41.937 [2024-10-14T11:39:33.790Z] Total : 2333.71 145.86 0.00 0.00 244491.45 2512.21 282727.16 00:28:42.196 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 324947 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 
00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.131 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.131 rmmod nvme_tcp 00:28:43.389 rmmod nvme_fabrics 00:28:43.389 rmmod nvme_keyring 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 324947 ']' 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 324947 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 324947 ']' 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 324947 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 324947 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 324947' 00:28:43.389 killing process with pid 324947 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 324947 00:28:43.389 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 324947 00:28:43.647 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:43.647 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:43.647 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:43.647 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:43.647 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:28:43.647 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:43.647 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:28:43.907 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.907 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.907 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.907 13:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.907 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.817 00:28:45.817 real 0m7.423s 00:28:45.817 user 0m22.407s 00:28:45.817 sys 0m1.477s 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.817 ************************************ 00:28:45.817 END TEST nvmf_shutdown_tc2 00:28:45.817 ************************************ 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:45.817 ************************************ 00:28:45.817 START TEST nvmf_shutdown_tc3 00:28:45.817 ************************************ 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.817 
13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.817 13:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.817 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.818 13:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.818 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.818 13:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.818 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.818 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:28:46.078 00:28:46.078 --- 10.0.0.2 ping statistics --- 00:28:46.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.078 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:28:46.078 00:28:46.078 --- 10.0.0.1 ping statistics --- 00:28:46.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.078 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.078 
13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=326028 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 326028 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 326028 ']' 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.078 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.078 [2024-10-14 13:39:37.843783] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:28:46.078 [2024-10-14 13:39:37.843883] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.078 [2024-10-14 13:39:37.911339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.338 [2024-10-14 13:39:37.962807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.338 [2024-10-14 13:39:37.962867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.338 [2024-10-14 13:39:37.962881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.338 [2024-10-14 13:39:37.962892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.338 [2024-10-14 13:39:37.962901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:46.338 [2024-10-14 13:39:37.964564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.338 [2024-10-14 13:39:37.964630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.338 [2024-10-14 13:39:37.964695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.338 [2024-10-14 13:39:37.964697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.338 [2024-10-14 13:39:38.113871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.338 13:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.338 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.339 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.339 Malloc1 00:28:46.597 [2024-10-14 13:39:38.208550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.597 Malloc2 00:28:46.597 Malloc3 00:28:46.597 Malloc4 00:28:46.597 Malloc5 00:28:46.597 Malloc6 00:28:46.856 Malloc7 00:28:46.856 Malloc8 00:28:46.856 Malloc9 
00:28:46.856 Malloc10 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=326091 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 326091 /var/tmp/bdevperf.sock 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 326091 ']' 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:46.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": 
${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 
00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:46.856 { 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme$subsystem", 00:28:46.856 "trtype": "$TEST_TRANSPORT", 00:28:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "$NVMF_PORT", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.856 "hdgst": ${hdgst:-false}, 00:28:46.856 "ddgst": ${ddgst:-false} 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.856 } 00:28:46.856 EOF 00:28:46.856 )") 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # jq . 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:28:46.856 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:46.856 "params": { 00:28:46.856 "name": "Nvme1", 00:28:46.856 "trtype": "tcp", 00:28:46.856 "traddr": "10.0.0.2", 00:28:46.856 "adrfam": "ipv4", 00:28:46.856 "trsvcid": "4420", 00:28:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:46.856 "hdgst": false, 00:28:46.856 "ddgst": false 00:28:46.856 }, 00:28:46.856 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme2", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme3", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme4", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 
00:28:46.857 "params": { 00:28:46.857 "name": "Nvme5", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme6", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme7", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme8", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme9", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:46.857 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 },{ 00:28:46.857 "params": { 00:28:46.857 "name": "Nvme10", 00:28:46.857 "trtype": "tcp", 00:28:46.857 "traddr": "10.0.0.2", 00:28:46.857 "adrfam": "ipv4", 00:28:46.857 "trsvcid": "4420", 00:28:46.857 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:46.857 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:46.857 "hdgst": false, 00:28:46.857 "ddgst": false 00:28:46.857 }, 00:28:46.857 "method": "bdev_nvme_attach_controller" 00:28:46.857 }' 00:28:47.115 [2024-10-14 13:39:38.718619] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:28:47.115 [2024-10-14 13:39:38.718705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326091 ] 00:28:47.115 [2024-10-14 13:39:38.785531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.115 [2024-10-14 13:39:38.832806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.491 Running I/O for 10 seconds... 
00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:49.071 13:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 326028 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 326028 ']' 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 326028 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.071 13:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 326028 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 326028' 00:28:49.071 killing process with pid 326028 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 326028 00:28:49.071 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 326028 00:28:49.071 [2024-10-14 13:39:40.869272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.071 [2024-10-14 13:39:40.869346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.071 [2024-10-14 13:39:40.869366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.071 [2024-10-14 13:39:40.869380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.071 [2024-10-14 13:39:40.869395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.071 [2024-10-14 13:39:40.869408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.071 [2024-10-14 13:39:40.869432] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.071 [2024-10-14 13:39:40.869445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.071 [2024-10-14 13:39:40.869458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8ab0 is same with the state(6) to be set 00:28:49.071 [2024-10-14 13:39:40.869662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.071 [2024-10-14 13:39:40.869709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.071 [2024-10-14 13:39:40.869724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.071 [2024-10-14 13:39:40.869737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.071 [2024-10-14 13:39:40.869749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869808] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.869991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 
is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 
00:28:49.072 [2024-10-14 13:39:40.870265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870406] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.870458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcde360 is same with the state(6) to be set 00:28:49.072 [2024-10-14 13:39:40.872189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 13:39:40.872453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.072 [2024-10-14 13:39:40.872472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.072 [2024-10-14 
13:39:40.872488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.072 [2024-10-14 13:39:40.872502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.072 [2024-10-14 13:39:40.872517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.072 [2024-10-14 13:39:40.872530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.072 [2024-10-14 13:39:40.872545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.072 [2024-10-14 13:39:40.872559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.072 [2024-10-14 13:39:40.872574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.072 [2024-10-14 13:39:40.872587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.072 [2024-10-14 13:39:40.872602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.072 [2024-10-14 13:39:40.872615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.872974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.872988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.073 [2024-10-14 13:39:40.873445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.073 [2024-10-14 13:39:40.873446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.073 [2024-10-14 13:39:40.873457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc2a0 is same with the state(6) to be set
00:28:49.074 [2024-10-14 13:39:40.873903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.873975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.873989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.074
[2024-10-14 13:39:40.874004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.874023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.874038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.874051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.874066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.874079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.874094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.874107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.874135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.074 [2024-10-14 13:39:40.874152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.074 [2024-10-14 13:39:40.874192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:49.074 [2024-10-14 13:39:40.874268] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15ae0e0 was disconnected and freed. reset controller.
00:28:49.075 [2024-10-14 13:39:40.874827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.874851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.874872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.874887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.874903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.874921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.874938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.874951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.874967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.874980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.874995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.875008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.875024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.875037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.875052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.875065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.875080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.875094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.875109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.875122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.075 [2024-10-14 13:39:40.875149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.075 [2024-10-14 13:39:40.875164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with t[2024-10-14 13:39:40.875252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:28:49.075 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875293] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with t[2024-10-14 13:39:40.875338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:1he state(6) to be set 00:28:49.075 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875376] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.075 [2024-10-14 13:39:40.875467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.075 [2024-10-14 13:39:40.875546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.075 [2024-10-14 13:39:40.875558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.075 [2024-10-14 13:39:40.875564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 
[2024-10-14 13:39:40.875794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same 
with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.875986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.875992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.875998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.876007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.876021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.876038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876046] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.876052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.876068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdc790 is same with the state(6) to be set 00:28:49.076 [2024-10-14 13:39:40.876081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.076 [2024-10-14 13:39:40.876256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.076 [2024-10-14 13:39:40.876269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.077 [2024-10-14 13:39:40.876297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.077 [2024-10-14 13:39:40.876329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.077 [2024-10-14 13:39:40.876358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.077 [2024-10-14 13:39:40.876386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.077 [2024-10-14 13:39:40.876413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.077 [2024-10-14 13:39:40.876442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.077 [2024-10-14 13:39:40.876470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.077 [2024-10-14 13:39:40.876484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.107 [2024-10-14 13:39:40.876753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.107 [2024-10-14 13:39:40.876826] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15acec0 was disconnected and freed. reset controller. 
00:28:49.107 [2024-10-14 13:39:40.876997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877176] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.107 [2024-10-14 13:39:40.877433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 
is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 
00:28:49.108 [2024-10-14 13:39:40.877631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877768] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.877791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdcc60 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.878423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:49.108 [2024-10-14 13:39:40.878497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8650 (9): Bad file descriptor 00:28:49.108 [2024-10-14 13:39:40.879142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the 
state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 
13:39:40.879547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879691] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.108 [2024-10-14 13:39:40.879828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.879840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.879852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.879873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.879892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.879905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.879916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd150 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.880042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.109 [2024-10-14 13:39:40.880073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8ab0 (9): Bad file descriptor 00:28:49.109 [2024-10-14 13:39:40.880194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180a930 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.880390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 
13:39:40.880494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e230 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.880565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17c6760 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.880736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.109 [2024-10-14 13:39:40.880836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.109 [2024-10-14 13:39:40.880849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cdee0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881279] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 
is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 
00:28:49.109 [2024-10-14 13:39:40.881776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.109 [2024-10-14 13:39:40.881830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881942] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.110 [2024-10-14 13:39:40.881965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8650 with addr=10.0.0.2, port=4420 00:28:49.110 [2024-10-14 13:39:40.881989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.881999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8650 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd4d0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882729] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:49.110 [2024-10-14 13:39:40.882844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.110 [2024-10-14 13:39:40.882871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8ab0 with addr=10.0.0.2, port=4420 00:28:49.110 [2024-10-14 13:39:40.882886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8ab0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.882911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8650 (9): Bad file descriptor 00:28:49.110 [2024-10-14 13:39:40.882989] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:49.110 [2024-10-14 13:39:40.883049] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:49.110 [2024-10-14 13:39:40.883138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 
00:28:49.110 [2024-10-14 13:39:40.883212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8ab0 (9): Bad file descriptor 00:28:49.110 [2024-10-14 13:39:40.883393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883405] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:49.110 [2024-10-14 13:39:40.883425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:49.110 [2024-10-14 13:39:40.883427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:49.110 [2024-10-14 13:39:40.883452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883541] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:49.110 [2024-10-14 13:39:40.883552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 
13:39:40.883676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.110 [2024-10-14 13:39:40.883745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883881] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.111 [2024-10-14 13:39:40.883897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.111 [2024-10-14 13:39:40.883920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.111 [2024-10-14 13:39:40.883942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:49.111 [2024-10-14 13:39:40.883944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.883989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884025] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:49.111 [2024-10-14 13:39:40.884036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcdd9a0 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.111 [2024-10-14 13:39:40.884527] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:49.111 [2024-10-14 13:39:40.884897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.884984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885059] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.111 [2024-10-14 13:39:40.885335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.111 [2024-10-14 13:39:40.885369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.111 [2024-10-14 13:39:40.885392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.111 [2024-10-14 13:39:40.885404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.111 [2024-10-14 13:39:40.885434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.111 [2024-10-14 13:39:40.885446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885454] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.111 [2024-10-14 13:39:40.885458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.111 [2024-10-14 13:39:40.885470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.111 [2024-10-14 13:39:40.885495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.111 [2024-10-14 13:39:40.885507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.111 [2024-10-14 13:39:40.885519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.111 [2024-10-14 13:39:40.885532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.111 [2024-10-14 13:39:40.885563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.111 [2024-10-14 13:39:40.885564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 
00:28:49.112 [2024-10-14 13:39:40.885707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdde70 is same with the state(6) to be set 00:28:49.112 [2024-10-14 13:39:40.885750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.885979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.885994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 
13:39:40.886173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.112 [2024-10-14 13:39:40.886613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.112 [2024-10-14 13:39:40.886628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 
[2024-10-14 13:39:40.886669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.886978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.886992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.887275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.887289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f4600 is same with the state(6) to be set 00:28:49.113 [2024-10-14 13:39:40.887361] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26f4600 was disconnected and freed. reset controller. 
00:28:49.113 [2024-10-14 13:39:40.888660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:49.113 [2024-10-14 13:39:40.888722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1802d80 (9): Bad file descriptor 00:28:49.113 [2024-10-14 13:39:40.889469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.889975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.889991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.890004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.113 [2024-10-14 13:39:40.890019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.113 [2024-10-14 13:39:40.890033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890135] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.114 [2024-10-14 13:39:40.890437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1642ac0 is same with the state(6) to be set 00:28:49.114 [2024-10-14 13:39:40.890522] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1642ac0 was disconnected and freed. reset controller. 
00:28:49.114 [2024-10-14 13:39:40.890641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.114 [2024-10-14 13:39:40.890672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1802d80 with addr=10.0.0.2, port=4420 00:28:49.114 [2024-10-14 13:39:40.890689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1802d80 is same with the state(6) to be set 00:28:49.114 [2024-10-14 13:39:40.890730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.890759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.890791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.890817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.890843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139de70 is same with the state(6) to 
be set 00:28:49.114 [2024-10-14 13:39:40.890909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.890930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.890957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.890982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.890995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.891008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.891020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1808110 is same with the state(6) to be set 00:28:49.114 [2024-10-14 13:39:40.891041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180a930 (9): Bad file descriptor 00:28:49.114 [2024-10-14 13:39:40.891092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.891147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.891169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.891183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.891196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.891214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.891228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.114 [2024-10-14 13:39:40.891240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.114 [2024-10-14 13:39:40.891253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e8a0 is same with the state(6) to be set 00:28:49.114 [2024-10-14 13:39:40.891283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e230 (9): Bad file descriptor 00:28:49.114 [2024-10-14 13:39:40.891312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c6760 (9): Bad file descriptor 00:28:49.114 [2024-10-14 13:39:40.891342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cdee0 (9): Bad file descriptor 00:28:49.114 [2024-10-14 13:39:40.892432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:49.114 [2024-10-14 13:39:40.892463] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1808110 (9): Bad file descriptor 00:28:49.114 [2024-10-14 13:39:40.892485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1802d80 (9): Bad file descriptor 00:28:49.114 [2024-10-14 13:39:40.892568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:49.114 [2024-10-14 13:39:40.892615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:49.114 [2024-10-14 13:39:40.892633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:49.114 [2024-10-14 13:39:40.892648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:49.114 [2024-10-14 13:39:40.892967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.114 [2024-10-14 13:39:40.892990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.114 [2024-10-14 13:39:40.893082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.114 [2024-10-14 13:39:40.893109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1808110 with addr=10.0.0.2, port=4420 00:28:49.114 [2024-10-14 13:39:40.893125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1808110 is same with the state(6) to be set 00:28:49.114 [2024-10-14 13:39:40.893229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.114 [2024-10-14 13:39:40.893254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8650 with addr=10.0.0.2, port=4420 00:28:49.114 [2024-10-14 13:39:40.893270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8650 is same with the state(6) to be set 00:28:49.114 [2024-10-14 13:39:40.893449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.114 [2024-10-14 13:39:40.893475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8ab0 with addr=10.0.0.2, port=4420 00:28:49.115 [2024-10-14 13:39:40.893491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8ab0 is same with the state(6) to be set 00:28:49.115 [2024-10-14 13:39:40.893509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1808110 (9): Bad file descriptor 00:28:49.115 [2024-10-14 13:39:40.893527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8650 (9): Bad file descriptor 00:28:49.115 [2024-10-14 13:39:40.893590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8ab0 (9): Bad file descriptor 00:28:49.115 [2024-10-14 13:39:40.893612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:49.115 
[2024-10-14 13:39:40.893630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:49.115 [2024-10-14 13:39:40.893643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:49.115 [2024-10-14 13:39:40.893662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:49.115 [2024-10-14 13:39:40.893676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:49.115 [2024-10-14 13:39:40.893688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:49.115 [2024-10-14 13:39:40.893739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.115 [2024-10-14 13:39:40.893756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.115 [2024-10-14 13:39:40.893769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.115 [2024-10-14 13:39:40.893780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.115 [2024-10-14 13:39:40.893792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.115 [2024-10-14 13:39:40.893843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.115 [2024-10-14 13:39:40.899282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:49.115 [2024-10-14 13:39:40.899521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.115 [2024-10-14 13:39:40.899559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1802d80 with addr=10.0.0.2, port=4420 00:28:49.115 [2024-10-14 13:39:40.899576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1802d80 is same with the state(6) to be set 00:28:49.115 [2024-10-14 13:39:40.899633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1802d80 (9): Bad file descriptor 00:28:49.115 [2024-10-14 13:39:40.899686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:49.115 [2024-10-14 13:39:40.899702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:49.115 [2024-10-14 13:39:40.899717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:49.115 [2024-10-14 13:39:40.899770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.115 [2024-10-14 13:39:40.900604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139de70 (9): Bad file descriptor 00:28:49.115 [2024-10-14 13:39:40.900653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e8a0 (9): Bad file descriptor 00:28:49.115 [2024-10-14 13:39:40.900805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.900829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.900860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.900875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.900892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.900906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.900922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.900952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.900969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.900984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.900999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 
nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:49.115 [2024-10-14 13:39:40.901349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.115 [2024-10-14 13:39:40.901737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.115 [2024-10-14 13:39:40.901750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.901981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.901996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 
13:39:40.902010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.902026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.902041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.902056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.902070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.902089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.902103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.902124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.902152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.902167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ae9d0 is same with the state(6) to be set 00:28:49.116 [2024-10-14 13:39:40.903363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903386] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.903973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.903986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.904002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.904015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.904030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.904047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.904063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.904077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.904092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.904106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.904122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.904143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.904160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.904174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.116 [2024-10-14 13:39:40.904189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.116 [2024-10-14 13:39:40.904203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 
13:39:40.904248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:49.117 [2024-10-14 13:39:40.904751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.904971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.904987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.905277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.905290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17afef0 is same with the state(6) to be set 00:28:49.117 [2024-10-14 13:39:40.906519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.906542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.906561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.906576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.906592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.906605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.906621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.117 [2024-10-14 13:39:40.906634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.117 [2024-10-14 13:39:40.906650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.906972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.906986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907155] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907315] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.118 [2024-10-14 13:39:40.907515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.118 [2024-10-14 13:39:40.907531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 
13:39:40.907644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.907975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.907991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:49.119 [2024-10-14 13:39:40.908141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908306] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.908393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.908407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200b1f0 is same with the state(6) to be set 00:28:49.119 [2024-10-14 13:39:40.909692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:49.119 [2024-10-14 13:39:40.909933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.119 [2024-10-14 13:39:40.909976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.119 [2024-10-14 13:39:40.909990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.910969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.910983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.911001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.911015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.911030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.911053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.911067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.911081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 
13:39:40.911095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.911109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.911124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.911146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.120 [2024-10-14 13:39:40.911161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.120 [2024-10-14 13:39:40.911175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.121 [2024-10-14 13:39:40.911574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.121 [2024-10-14 13:39:40.911587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:49.121 [2024-10-14 13:39:40.911601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1643ff0 is same with the state(6) to be set 00:28:49.383 [2024-10-14 13:39:40.912814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:49.383 [2024-10-14 13:39:40.912846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:49.383 [2024-10-14 13:39:40.912864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:49.383 [2024-10-14 13:39:40.912882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:49.383 [2024-10-14 13:39:40.913289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.383 [2024-10-14 13:39:40.913319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139e230 with addr=10.0.0.2, port=4420 00:28:49.383 [2024-10-14 13:39:40.913335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e230 is same with the state(6) to be set 00:28:49.383 [2024-10-14 13:39:40.913434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.383 [2024-10-14 13:39:40.913458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cdee0 with addr=10.0.0.2, port=4420 00:28:49.383 [2024-10-14 13:39:40.913474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cdee0 is same with the state(6) to be set 00:28:49.383 [2024-10-14 13:39:40.913565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.383 [2024-10-14 13:39:40.913595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6760 with addr=10.0.0.2, port=4420 00:28:49.383 [2024-10-14 13:39:40.913612] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c6760 is same with the state(6) to be set 00:28:49.383 [2024-10-14 13:39:40.913709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.383 [2024-10-14 13:39:40.913733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x180a930 with addr=10.0.0.2, port=4420 00:28:49.383 [2024-10-14 13:39:40.913748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180a930 is same with the state(6) to be set 00:28:49.383 [2024-10-14 13:39:40.914637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 
[2024-10-14 13:39:40.914771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.914977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.914991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:49.383 [2024-10-14 13:39:40.915277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.383 [2024-10-14 13:39:40.915382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.383 [2024-10-14 13:39:40.915398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 
13:39:40.915933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.915976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.915990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.916235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.916249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2258ce0 is same with the state(6) to be set 00:28:49.384 [2024-10-14 13:39:40.917448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 
13:39:40.917642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.384 [2024-10-14 13:39:40.917771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.384 [2024-10-14 13:39:40.917786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.917799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.917814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.917828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.917843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.917856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.917871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.917885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.917900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.917913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.917928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.917941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.917956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.917970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.917985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:49.385 [2024-10-14 13:39:40.918144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918305] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 
13:39:40.918811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.918983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.385 [2024-10-14 13:39:40.918998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.385 [2024-10-14 13:39:40.919012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 [2024-10-14 13:39:40.919281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.386 [2024-10-14 13:39:40.919296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.386 
[2024-10-14 13:39:40.919310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.386 [2024-10-14 13:39:40.919325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.386 [2024-10-14 13:39:40.919339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.386 [2024-10-14 13:39:40.919353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a69d0 is same with the state(6) to be set
00:28:49.386 [2024-10-14 13:39:40.921255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:49.386 [2024-10-14 13:39:40.921288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:49.386 [2024-10-14 13:39:40.921306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.386 [2024-10-14 13:39:40.921323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:49.386 [2024-10-14 13:39:40.921339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:49.386 task offset: 20224 on job bdev=Nvme2n1 fails
00:28:49.386
00:28:49.386                                                 Latency(us)
00:28:49.386 [2024-10-14T11:39:41.239Z] Device Information : runtime(s)    IOPS     MiB/s    Fail/s    TO/s      Average      min        max
00:28:49.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme1n1 ended in about 0.83 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme1n1  :   0.83    232.59    14.54    77.53    0.00   203761.40    6844.87   223696.21
00:28:49.386 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme2n1 ended in about 0.82 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme2n1  :   0.82    155.37     9.71    77.68    0.00   265067.01    5000.15   295154.73
00:28:49.386 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme3n1 ended in about 0.85 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme3n1  :   0.85    175.49    10.97    50.64    0.00   265048.11   33204.91   253211.69
00:28:49.386 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme4n1 ended in about 0.85 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme4n1  :   0.85    150.19     9.39    75.10    0.00   262338.94   18932.62   259425.47
00:28:49.386 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme5n1 ended in about 0.86 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme5n1  :   0.86    149.65     9.35    74.82    0.00   257250.04   20874.43   264085.81
00:28:49.386 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme6n1 ended in about 0.86 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme6n1  :   0.86    159.88     9.99    62.56    0.00   252020.56   18641.35   243891.01
00:28:49.386 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme7n1 ended in about 0.87 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme7n1  :   0.87    147.76     9.24    73.88    0.00   248669.17   18641.35   256318.58
00:28:49.386 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme8n1 ended in about 0.83 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme8n1  :   0.83    153.39     9.59    76.70    0.00   232162.80   33981.63   239230.67
00:28:49.386 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme9n1 ended in about 0.84 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme9n1  :   0.84    190.87    11.93    38.17    0.00   227142.42   24466.77   253211.69
00:28:49.386 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.386 Job: Nvme10n1 ended in about 0.86 seconds with error
00:28:49.386 Verification LBA range: start 0x0 length 0x400
00:28:49.386 Nvme10n1 :   0.86    149.09     9.32    74.55    0.00   228340.12   21942.42   259425.47
00:28:49.386 [2024-10-14T11:39:41.239Z] ===================================================================================================================
00:28:49.386 [2024-10-14T11:39:41.239Z] Total    :          1664.29   104.02   681.64    0.00   242876.23    5000.15   295154.73
00:28:49.386 [2024-10-14 13:39:40.946904] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:49.386 [2024-10-14 13:39:40.946988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:49.386 [2024-10-14 13:39:40.947116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e230 (9): Bad file descriptor
00:28:49.386 [2024-10-14 13:39:40.947154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cdee0 (9): Bad file descriptor
00:28:49.386 [2024-10-14 13:39:40.947175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c6760 (9): Bad file descriptor
00:28:49.386 [2024-10-14 13:39:40.947193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180a930 (9): Bad file descriptor
00:28:49.387 [2024-10-14 13:39:40.947798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.387 [2024-10-14 13:39:40.947835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8650 with addr=10.0.0.2, port=4420
00:28:49.387 [2024-10-14 13:39:40.947856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8650 is same with the state(6) to be set
00:28:49.387 [2024-10-14 13:39:40.948116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.387 [2024-10-14 13:39:40.948149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1808110 with addr=10.0.0.2, port=4420
00:28:49.387 [2024-10-14 13:39:40.948175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1808110 is same with the state(6) to be set
00:28:49.387 [2024-10-14 13:39:40.948264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.387 [2024-10-14 13:39:40.948289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a8ab0 with addr=10.0.0.2, port=4420
00:28:49.387 [2024-10-14 13:39:40.948305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8ab0 is same with the state(6) to be set
00:28:49.387 [2024-10-14 13:39:40.948396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.387 [2024-10-14 13:39:40.948423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1802d80 with addr=10.0.0.2, port=4420
00:28:49.387 [2024-10-14 13:39:40.948440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1802d80 is same with the state(6) to be set
00:28:49.387 [2024-10-14 13:39:40.948541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.387 [2024-10-14 13:39:40.948567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139de70 with addr=10.0.0.2, port=4420
00:28:49.387 [2024-10-14 13:39:40.948583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139de70 is same with the state(6) to be set
00:28:49.387 [2024-10-14 13:39:40.948664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.387 [2024-10-14 13:39:40.948690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139e8a0 with addr=10.0.0.2, port=4420
00:28:49.387 [2024-10-14 13:39:40.948705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e8a0 is same with the state(6) to be set
00:28:49.387 [2024-10-14 13:39:40.948720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:28:49.387 [2024-10-14 13:39:40.948734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:28:49.387 [2024-10-14 13:39:40.948750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:28:49.387 [2024-10-14 13:39:40.948772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:49.387 [2024-10-14 13:39:40.948786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:49.387 [2024-10-14 13:39:40.948799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:49.387 [2024-10-14 13:39:40.948817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:28:49.387 [2024-10-14 13:39:40.948831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:28:49.387 [2024-10-14 13:39:40.948843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:28:49.387 [2024-10-14 13:39:40.948861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:49.387 [2024-10-14 13:39:40.948874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:49.387 [2024-10-14 13:39:40.948887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:49.387 [2024-10-14 13:39:40.948920] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:49.387 [2024-10-14 13:39:40.948942] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:49.387 [2024-10-14 13:39:40.948961] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:49.387 [2024-10-14 13:39:40.948978] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:49.387 [2024-10-14 13:39:40.949630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.387 [2024-10-14 13:39:40.949660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.387 [2024-10-14 13:39:40.949673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.387 [2024-10-14 13:39:40.949684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.387 [2024-10-14 13:39:40.949701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8650 (9): Bad file descriptor 00:28:49.387 [2024-10-14 13:39:40.949720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1808110 (9): Bad file descriptor 00:28:49.387 [2024-10-14 13:39:40.949737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a8ab0 (9): Bad file descriptor 00:28:49.387 [2024-10-14 13:39:40.949754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1802d80 (9): Bad file descriptor 00:28:49.387 [2024-10-14 13:39:40.949771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139de70 (9): Bad file descriptor 00:28:49.387 [2024-10-14 13:39:40.949788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139e8a0 (9): Bad file descriptor 00:28:49.387 [2024-10-14 13:39:40.949851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:49.387 [2024-10-14 13:39:40.949870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:49.387 [2024-10-14 13:39:40.949884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:49.387 [2024-10-14 13:39:40.949900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:49.387 [2024-10-14 13:39:40.949914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:49.387 [2024-10-14 13:39:40.949927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:28:49.387 [2024-10-14 13:39:40.949943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.387 [2024-10-14 13:39:40.949956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.387 [2024-10-14 13:39:40.949968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.387 [2024-10-14 13:39:40.949984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:49.387 [2024-10-14 13:39:40.949997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:49.387 [2024-10-14 13:39:40.950010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:49.387 [2024-10-14 13:39:40.950026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:49.387 [2024-10-14 13:39:40.950039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:49.387 [2024-10-14 13:39:40.950051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:49.387 [2024-10-14 13:39:40.950067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:49.387 [2024-10-14 13:39:40.950080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:49.387 [2024-10-14 13:39:40.950092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:49.387 [2024-10-14 13:39:40.950155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.387 [2024-10-14 13:39:40.950175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.387 [2024-10-14 13:39:40.950187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.387 [2024-10-14 13:39:40.950203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.387 [2024-10-14 13:39:40.950214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.388 [2024-10-14 13:39:40.950226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.648 13:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:50.585 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 326091 00:28:50.585 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:28:50.585 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 326091 00:28:50.585 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 326091 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 
00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.586 rmmod nvme_tcp 00:28:50.586 rmmod nvme_fabrics 00:28:50.586 rmmod nvme_keyring 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 326028 ']' 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 326028 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 326028 ']' 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 326028 00:28:50.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (326028) - No such process 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 326028 is not found' 00:28:50.586 Process with pid 326028 is not found 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 
00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.586 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.115 00:28:53.115 real 0m6.863s 00:28:53.115 user 0m15.600s 00:28:53.115 sys 0m1.370s 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.115 ************************************ 00:28:53.115 END TEST nvmf_shutdown_tc3 00:28:53.115 ************************************ 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:53.115 13:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:53.115 ************************************ 00:28:53.115 START TEST nvmf_shutdown_tc4 00:28:53.115 ************************************ 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.115 13:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:53.115 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.115 
13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:53.115 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:53.115 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:53.115 Found net devices under 0000:0a:00.1: cvl_0_1 
00:28:53.115 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.116 13:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:28:53.116 00:28:53.116 --- 10.0.0.2 ping statistics --- 00:28:53.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.116 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:53.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:28:53.116 00:28:53.116 --- 10.0.0.1 ping statistics --- 00:28:53.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.116 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=326980 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 326980 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 326980 ']' 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:53.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:53.116 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.116 [2024-10-14 13:39:44.749196] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:28:53.116 [2024-10-14 13:39:44.749268] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.116 [2024-10-14 13:39:44.815976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.116 [2024-10-14 13:39:44.862910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.116 [2024-10-14 13:39:44.862964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.116 [2024-10-14 13:39:44.862992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.116 [2024-10-14 13:39:44.863003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.116 [2024-10-14 13:39:44.863012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:53.116 [2024-10-14 13:39:44.864479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.116 [2024-10-14 13:39:44.864541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.116 [2024-10-14 13:39:44.864607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.116 [2024-10-14 13:39:44.864604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.376 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:53.376 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:28:53.376 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:53.376 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:53.376 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.376 [2024-10-14 13:39:45.007367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.376 13:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.376 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.376 Malloc1 00:28:53.376 [2024-10-14 13:39:45.090012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.376 Malloc2 00:28:53.376 Malloc3 00:28:53.376 Malloc4 00:28:53.635 Malloc5 00:28:53.635 Malloc6 00:28:53.635 Malloc7 00:28:53.635 Malloc8 00:28:53.635 Malloc9 
00:28:53.894 Malloc10 00:28:53.894 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.894 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:53.894 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:53.894 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:53.894 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=327042 00:28:53.894 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:53.894 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:53.895 [2024-10-14 13:39:45.616256] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 326980 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 326980 ']' 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 326980 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 326980 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 326980' 00:28:59.172 killing process with pid 326980 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 326980 00:28:59.172 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 326980 00:28:59.172 [2024-10-14 13:39:50.609846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce190 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.609961] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce190 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.609985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce190 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.610000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce190 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.610013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce190 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.610024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce190 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.610040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce190 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611915] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.172 [2024-10-14 13:39:50.611927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ceb50 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.612979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.613030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.613047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.613061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.613073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.613085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.613097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.613109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdcc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.616230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.616268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.616283] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.616296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.616308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.616320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618427] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.618550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3dc0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.619682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233f510 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.619715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233f510 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620229] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.620311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d3420 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.626488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23410a0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.626544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23410a0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.626568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23410a0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.626584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23410a0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.626596] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23410a0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.626609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23410a0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.626621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23410a0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.627304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341570 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.627348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341570 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.627364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341570 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.627378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341570 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.627390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341570 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.627402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341570 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.628797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341a40 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.628850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341a40 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.628866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341a40 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.628879] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341a40 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.628891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341a40 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.628902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341a40 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.628914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341a40 is same with the state(6) to be set 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O failed: -6 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O failed: -6 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O failed: -6 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O failed: -6 00:28:59.173 [2024-10-14 13:39:50.629851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 [2024-10-14 13:39:50.629882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with Write completed with error (sct=0, sc=8) 00:28:59.173 the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.629900] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 [2024-10-14 13:39:50.629913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 [2024-10-14 13:39:50.629926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 [2024-10-14 13:39:50.629937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 starting I/O failed: -6 00:28:59.173 [2024-10-14 13:39:50.629949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 [2024-10-14 13:39:50.629961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 [2024-10-14 13:39:50.629973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2340bd0 is same with the state(6) to be set 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O failed: -6 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O failed: -6 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O 
failed: -6 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.173 starting I/O failed: -6 00:28:59.173 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 [2024-10-14 13:39:50.630436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error 
(sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 
00:28:59.174 [2024-10-14 13:39:50.631663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 
Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, 
sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 [2024-10-14 13:39:50.632874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 
00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 [2024-10-14 13:39:50.633583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2342d80 is same with the state(6) to be set 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.174 starting I/O failed: -6 00:28:59.174 [2024-10-14 13:39:50.633613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2342d80 is same with the state(6) to be set 00:28:59.174 Write completed with error (sct=0, sc=8) 00:28:59.175 [2024-10-14 13:39:50.633628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2342d80 is same with the state(6) to be set 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.633640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2342d80 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 
[2024-10-14 13:39:50.633654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2342d80 is same with the state(6) to be set 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.633667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2342d80 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 
[2024-10-14 13:39:50.634019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.634047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.634062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 [2024-10-14 13:39:50.634074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.634086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 [2024-10-14 13:39:50.634098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.634121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.634143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.634156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341f10 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175
starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.634363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.175 NVMe io qpair process completion error 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 
00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.635605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:59.175 [2024-10-14 13:39:50.635702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22437a0 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.635733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22437a0 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.635747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22437a0 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.635759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22437a0 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.635770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22437a0 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.635783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22437a0 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, 
sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 [2024-10-14 13:39:50.636096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd7d0 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.636121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd7d0 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 [2024-10-14 13:39:50.636153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd7d0 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 [2024-10-14 13:39:50.636200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x20cd7d0 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 [2024-10-14 13:39:50.636216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd7d0 is same with the state(6) to be set 00:28:59.175 [2024-10-14 13:39:50.636228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd7d0 is same with the state(6) to be set 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.175 starting I/O failed: -6 00:28:59.175 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 
[2024-10-14 13:39:50.636583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 [2024-10-14 13:39:50.636610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 [2024-10-14 13:39:50.636625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 [2024-10-14 13:39:50.636637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 starting I/O failed: -6 00:28:59.176 [2024-10-14 13:39:50.636649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 [2024-10-14 13:39:50.636661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.176 [2024-10-14 13:39:50.636696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 [2024-10-14 13:39:50.636755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22430a0 is same with the state(6) to be set 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 
00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 
00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 [2024-10-14 13:39:50.637871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O 
failed: -6 00:28:59.176 Write completed with error (sct=0, sc=8) 00:28:59.176 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted]
00:28:59.177 [2024-10-14 13:39:50.639706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.177 NVMe io qpair process completion error
[repeated write-error entries omitted]
00:28:59.177 [2024-10-14 13:39:50.640998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[repeated write-error entries omitted]
00:28:59.177 [2024-10-14 13:39:50.642075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error entries omitted]
00:28:59.178 [2024-10-14 13:39:50.643234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error entries omitted]
00:28:59.178 [2024-10-14 13:39:50.645250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.178 NVMe io qpair process completion error
[repeated write-error entries omitted]
00:28:59.178 [2024-10-14 13:39:50.646599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error entries omitted]
00:28:59.179 [2024-10-14 13:39:50.647668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error entries omitted]
00:28:59.179 [2024-10-14 13:39:50.648829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error entries omitted]
00:28:59.180 [2024-10-14 13:39:50.650748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:59.180 NVMe io qpair process completion error
[repeated write-error entries omitted]
00:28:59.180 [2024-10-14 13:39:50.652161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.180 starting I/O failed: -6 00:28:59.180 Write
completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 [2024-10-14 13:39:50.653105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 
Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, 
sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 Write completed with error (sct=0, sc=8) 00:28:59.180 starting I/O failed: -6 00:28:59.180 [2024-10-14 13:39:50.654252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.180 Write completed with error (sct=0, sc=8) 
00:28:59.180 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, 
sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error 
(sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 [2024-10-14 13:39:50.657074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:59.181 NVMe io qpair process completion error 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 
Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error 
(sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 Write completed with error (sct=0, sc=8) 00:28:59.181 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed 
with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 
Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, 
sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 
00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, 
sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error 
(sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.182 Write completed with error (sct=0, sc=8) 00:28:59.182 starting I/O failed: -6 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 starting I/O failed: -6 00:28:59.183 [2024-10-14 13:39:50.661822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.183 NVMe io qpair process completion error 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 starting I/O failed: -6 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 starting I/O failed: -6 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 starting I/O failed: -6 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error (sct=0, sc=8) 00:28:59.183 Write completed with error 
00:28:59.183 Write completed with error (sct=0, sc=8)
00:28:59.183 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted before each entry below]
00:28:59.183 [2024-10-14 13:39:50.663099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.183 [2024-10-14 13:39:50.664182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.183 [2024-10-14 13:39:50.665265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.184 [2024-10-14 13:39:50.667292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:59.184 NVMe io qpair process completion error
00:28:59.184 [2024-10-14 13:39:50.668577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.184 [2024-10-14 13:39:50.669597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.185 [2024-10-14 13:39:50.670730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.185 [2024-10-14 13:39:50.672685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:59.185 NVMe io qpair process completion error
00:28:59.186 [2024-10-14 13:39:50.673905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:59.186 [2024-10-14 13:39:50.674944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:59.186 [2024-10-14 13:39:50.676098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.187 starting I/O failed: -6
00:28:59.187 Write
completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 
Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 
00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 [2024-10-14 13:39:50.679091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:59.187 NVMe io qpair process completion error 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write 
completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 [2024-10-14 13:39:50.680616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 
00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write 
completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 [2024-10-14 13:39:50.681634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.187 starting I/O failed: -6 00:28:59.187 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 
starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 
Write completed with error (sct=0, sc=8) 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 [2024-10-14 13:39:50.682783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 
00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: 
-6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O 
failed: -6 00:28:59.188 Write completed with error (sct=0, sc=8) 00:28:59.188 starting I/O failed: -6 00:28:59.188 [2024-10-14 13:39:50.686146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.188 NVMe io qpair process completion error 00:28:59.188 Initializing NVMe Controllers 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:28:59.188 Controller IO queue size 128, less than required. 00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.188 Controller IO queue size 128, less than required. 00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:28:59.188 Controller IO queue size 128, less than required. 00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:28:59.188 Controller IO queue size 128, less than required. 00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:28:59.188 Controller IO queue size 128, less than required. 00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:28:59.188 Controller IO queue size 128, less than required. 
00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:28:59.188 Controller IO queue size 128, less than required. 00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:28:59.188 Controller IO queue size 128, less than required. 00:28:59.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:28:59.189 Controller IO queue size 128, less than required. 00:28:59.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:28:59.189 Controller IO queue size 128, less than required. 00:28:59.189 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:59.189 Initialization complete. Launching workers.
00:28:59.189 ========================================================
00:28:59.189                                                                   Latency(us)
00:28:59.189 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:     1790.76      76.95   71503.36     922.04  138097.12
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     1760.14      75.63   71926.05    1231.69  150431.77
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1756.79      75.49   72082.92    1103.49  123814.59
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:     1805.43      77.58   70164.23     922.85  123026.68
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:     1776.71      76.34   71325.86     982.25  120335.54
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:     1739.38      74.74   72883.66     985.60  124532.79
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:     1806.27      77.61   70211.32     854.97  127295.58
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:     1770.21      76.06   71664.19     926.08  129179.83
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:     1766.43      75.90   71841.86    1055.62  131859.86
00:28:59.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:     1760.35      75.64   72115.88    1097.17  119893.04
00:28:59.189 ========================================================
00:28:59.189 Total                                                                    :    17732.47     761.94   71562.97     854.97  150431.77
00:28:59.189
00:28:59.189 [2024-10-14 13:39:50.692408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1028200 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102c160 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025b20 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10263a0 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026a00 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1028530 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10261c0 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1028860 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10266d0 is same with the state(6) to be set
00:28:59.189 [2024-10-14 13:39:50.692956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027ed0 is same with the state(6) to be set
00:28:59.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:59.448 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 327042
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 327042
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 327042
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:00.387 13:39:52
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:00.387 rmmod nvme_tcp 00:29:00.387 rmmod nvme_fabrics 00:29:00.387 rmmod nvme_keyring 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:00.387 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 326980 ']' 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 326980 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 326980 ']' 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 326980 00:29:00.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (326980) - No such process 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 326980 is not found' 
00:29:00.388 Process with pid 326980 is not found 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.388 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.926 00:29:02.926 real 0m9.677s 00:29:02.926 user 0m23.721s 00:29:02.926 sys 0m5.592s 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:02.926 ************************************ 00:29:02.926 END TEST nvmf_shutdown_tc4 00:29:02.926 ************************************ 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:02.926 00:29:02.926 real 0m36.256s 00:29:02.926 user 1m35.535s 00:29:02.926 sys 0m11.969s 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:02.926 ************************************ 00:29:02.926 END TEST nvmf_shutdown 00:29:02.926 ************************************ 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:02.926 00:29:02.926 real 17m53.850s 00:29:02.926 user 49m57.956s 00:29:02.926 sys 3m52.661s 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.926 13:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:02.926 ************************************ 00:29:02.926 END TEST nvmf_target_extra 00:29:02.926 ************************************ 00:29:02.926 13:39:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:02.926 13:39:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:02.926 13:39:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.926 13:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.926 ************************************ 00:29:02.926 START TEST nvmf_host 00:29:02.926 ************************************ 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:02.926 * Looking for test storage... 00:29:02.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:02.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.926 --rc genhtml_branch_coverage=1 00:29:02.926 --rc genhtml_function_coverage=1 00:29:02.926 --rc genhtml_legend=1 00:29:02.926 --rc geninfo_all_blocks=1 00:29:02.926 --rc geninfo_unexecuted_blocks=1 00:29:02.926 00:29:02.926 ' 00:29:02.926 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:02.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.926 --rc genhtml_branch_coverage=1 00:29:02.926 --rc genhtml_function_coverage=1 00:29:02.926 --rc genhtml_legend=1 00:29:02.926 --rc 
geninfo_all_blocks=1 00:29:02.927 --rc geninfo_unexecuted_blocks=1 00:29:02.927 00:29:02.927 ' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.927 --rc genhtml_branch_coverage=1 00:29:02.927 --rc genhtml_function_coverage=1 00:29:02.927 --rc genhtml_legend=1 00:29:02.927 --rc geninfo_all_blocks=1 00:29:02.927 --rc geninfo_unexecuted_blocks=1 00:29:02.927 00:29:02.927 ' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.927 --rc genhtml_branch_coverage=1 00:29:02.927 --rc genhtml_function_coverage=1 00:29:02.927 --rc genhtml_legend=1 00:29:02.927 --rc geninfo_all_blocks=1 00:29:02.927 --rc geninfo_unexecuted_blocks=1 00:29:02.927 00:29:02.927 ' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.927 ************************************ 00:29:02.927 START TEST nvmf_multicontroller 00:29:02.927 ************************************ 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:02.927 * Looking for test storage... 
00:29:02.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.927 --rc genhtml_branch_coverage=1 00:29:02.927 --rc genhtml_function_coverage=1 
00:29:02.927 --rc genhtml_legend=1 00:29:02.927 --rc geninfo_all_blocks=1 00:29:02.927 --rc geninfo_unexecuted_blocks=1 00:29:02.927 00:29:02.927 ' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.927 --rc genhtml_branch_coverage=1 00:29:02.927 --rc genhtml_function_coverage=1 00:29:02.927 --rc genhtml_legend=1 00:29:02.927 --rc geninfo_all_blocks=1 00:29:02.927 --rc geninfo_unexecuted_blocks=1 00:29:02.927 00:29:02.927 ' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.927 --rc genhtml_branch_coverage=1 00:29:02.927 --rc genhtml_function_coverage=1 00:29:02.927 --rc genhtml_legend=1 00:29:02.927 --rc geninfo_all_blocks=1 00:29:02.927 --rc geninfo_unexecuted_blocks=1 00:29:02.927 00:29:02.927 ' 00:29:02.927 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.927 --rc genhtml_branch_coverage=1 00:29:02.927 --rc genhtml_function_coverage=1 00:29:02.927 --rc genhtml_legend=1 00:29:02.927 --rc geninfo_all_blocks=1 00:29:02.928 --rc geninfo_unexecuted_blocks=1 00:29:02.928 00:29:02.928 ' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.928 13:39:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.928 13:39:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.832 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.832 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.832 13:39:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.832 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.832 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.833 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.833 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:29:05.092 00:29:05.092 --- 10.0.0.2 ping statistics --- 00:29:05.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.092 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:05.092 00:29:05.092 --- 10.0.0.1 ping statistics --- 00:29:05.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.092 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:05.092 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=329834 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 329834 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 329834 ']' 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.093 13:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.093 [2024-10-14 13:39:56.769686] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:29:05.093 [2024-10-14 13:39:56.769764] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.093 [2024-10-14 13:39:56.834398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:05.093 [2024-10-14 13:39:56.882989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.093 [2024-10-14 13:39:56.883044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:05.093 [2024-10-14 13:39:56.883058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.093 [2024-10-14 13:39:56.883068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.093 [2024-10-14 13:39:56.883077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.093 [2024-10-14 13:39:56.884646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.093 [2024-10-14 13:39:56.884705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.093 [2024-10-14 13:39:56.884708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 [2024-10-14 13:39:57.032980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 Malloc0 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 [2024-10-14 
13:39:57.094281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:05.352 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.353 [2024-10-14 13:39:57.102140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.353 Malloc1 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=329882 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 329882 /var/tmp/bdevperf.sock 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 329882 ']' 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:05.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.353 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.611 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:05.611 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:05.611 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:05.611 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.611 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.869 NVMe0n1 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.869 1 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:05.869 13:39:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.869 request: 00:29:05.869 { 00:29:05.869 "name": "NVMe0", 00:29:05.869 "trtype": "tcp", 00:29:05.869 "traddr": "10.0.0.2", 00:29:05.869 "adrfam": "ipv4", 00:29:05.869 "trsvcid": "4420", 00:29:05.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.869 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:05.869 "hostaddr": "10.0.0.1", 00:29:05.869 "prchk_reftag": false, 00:29:05.869 "prchk_guard": false, 00:29:05.869 "hdgst": false, 00:29:05.869 "ddgst": false, 00:29:05.869 "allow_unrecognized_csi": false, 00:29:05.869 "method": "bdev_nvme_attach_controller", 00:29:05.869 "req_id": 1 00:29:05.869 } 00:29:05.869 Got JSON-RPC error response 00:29:05.869 response: 00:29:05.869 { 00:29:05.869 "code": -114, 00:29:05.869 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:05.869 } 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:05.869 13:39:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.869 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:05.870 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.870 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.127 request: 00:29:06.127 { 00:29:06.127 "name": "NVMe0", 00:29:06.127 "trtype": "tcp", 00:29:06.127 "traddr": "10.0.0.2", 00:29:06.127 "adrfam": "ipv4", 00:29:06.128 "trsvcid": "4420", 00:29:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.128 "hostaddr": "10.0.0.1", 00:29:06.128 "prchk_reftag": false, 00:29:06.128 "prchk_guard": false, 00:29:06.128 "hdgst": false, 00:29:06.128 "ddgst": false, 00:29:06.128 "allow_unrecognized_csi": false, 00:29:06.128 "method": "bdev_nvme_attach_controller", 00:29:06.128 "req_id": 1 00:29:06.128 } 00:29:06.128 Got JSON-RPC error response 00:29:06.128 response: 00:29:06.128 { 00:29:06.128 "code": -114, 00:29:06.128 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:06.128 } 00:29:06.128 13:39:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.128 request: 00:29:06.128 { 00:29:06.128 "name": "NVMe0", 00:29:06.128 "trtype": "tcp", 00:29:06.128 "traddr": "10.0.0.2", 00:29:06.128 "adrfam": "ipv4", 00:29:06.128 "trsvcid": "4420", 00:29:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.128 "hostaddr": "10.0.0.1", 00:29:06.128 "prchk_reftag": false, 00:29:06.128 "prchk_guard": false, 00:29:06.128 "hdgst": false, 00:29:06.128 "ddgst": false, 00:29:06.128 "multipath": "disable", 00:29:06.128 "allow_unrecognized_csi": false, 00:29:06.128 "method": "bdev_nvme_attach_controller", 00:29:06.128 "req_id": 1 00:29:06.128 } 00:29:06.128 Got JSON-RPC error response 00:29:06.128 response: 00:29:06.128 { 00:29:06.128 "code": -114, 00:29:06.128 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:06.128 } 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.128 request: 00:29:06.128 { 00:29:06.128 "name": "NVMe0", 00:29:06.128 "trtype": "tcp", 00:29:06.128 "traddr": "10.0.0.2", 00:29:06.128 "adrfam": "ipv4", 00:29:06.128 "trsvcid": "4420", 00:29:06.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.128 "hostaddr": "10.0.0.1", 00:29:06.128 "prchk_reftag": false, 00:29:06.128 "prchk_guard": false, 00:29:06.128 "hdgst": false, 00:29:06.128 "ddgst": false, 00:29:06.128 "multipath": "failover", 00:29:06.128 "allow_unrecognized_csi": false, 00:29:06.128 "method": "bdev_nvme_attach_controller", 00:29:06.128 "req_id": 1 00:29:06.128 } 00:29:06.128 Got JSON-RPC error response 00:29:06.128 response: 00:29:06.128 { 00:29:06.128 "code": -114, 00:29:06.128 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:06.128 } 00:29:06.128 13:39:57 
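The attach attempts above are all expected to fail with JSON-RPC error -114 because a controller named NVMe0 is already registered. A minimal sketch of composing such a request and classifying the error, with the field values copied from the log (the `is_duplicate_controller` helper is illustrative, not part of SPDK):

```python
import json

# Request mirroring the log's failing bdev_nvme_attach_controller call
# (flat layout, as the test harness prints it).
request = {
    "name": "NVMe0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostaddr": "10.0.0.1",
    "multipath": "failover",
    "method": "bdev_nvme_attach_controller",
    "req_id": 1,
}

# Error response shape as it appears in the log; -114 signals a name collision.
response = {
    "code": -114,
    "message": "A controller named NVMe0 already exists with the specified network path",
}

def is_duplicate_controller(resp):
    """True when the JSON-RPC error indicates the controller name is already taken."""
    return resp.get("code") == -114

payload = json.dumps(request)  # what rpc_cmd would write to /var/tmp/bdevperf.sock
assert is_duplicate_controller(response)
```

The test wraps each call in `NOT rpc_cmd ...` precisely because this -114 response is the pass condition, not a failure.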
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.128 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.386 NVMe0n1 00:29:06.386 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.386 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:06.386 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.386 13:39:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.386 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:06.386 13:39:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:07.355 { 00:29:07.355 "results": [ 00:29:07.355 { 00:29:07.355 "job": "NVMe0n1", 00:29:07.355 "core_mask": "0x1", 00:29:07.355 "workload": "write", 00:29:07.355 "status": "finished", 00:29:07.355 "queue_depth": 128, 00:29:07.355 "io_size": 4096, 00:29:07.355 "runtime": 1.004472, 00:29:07.355 "iops": 18276.2685271466, 00:29:07.355 "mibps": 71.3916739341664, 00:29:07.355 "io_failed": 0, 00:29:07.355 "io_timeout": 0, 00:29:07.355 "avg_latency_us": 6992.251284050146, 00:29:07.355 "min_latency_us": 4344.794074074074, 00:29:07.355 "max_latency_us": 17282.085925925927 00:29:07.355 } 00:29:07.355 ], 00:29:07.355 "core_count": 1 00:29:07.355 } 00:29:07.355 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
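The bdevperf result above reports both `iops` and `mibps`; the second is derived from the first via the 4096-byte I/O size. A quick sketch reproducing that arithmetic from the logged numbers:

```python
# Reproduce bdevperf's MiB/s figure from the reported IOPS and 4 KiB I/O size.
iops = 18276.2685271466   # "iops" from the perform_tests result above
io_size = 4096            # "io_size" in bytes
mibps = iops * io_size / (1024 * 1024)

# Should agree with the reported "mibps": 71.3916739341664
assert abs(mibps - 71.3916739341664) < 1e-6
```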
bdev_nvme_detach_controller NVMe1 00:29:07.355 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.355 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 329882 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 329882 ']' 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 329882 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329882 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:07.612 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329882' 00:29:07.612 killing process with pid 329882 00:29:07.613 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 329882 00:29:07.613 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 329882 00:29:07.613 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
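The `killprocess` trace above follows a fixed guard sequence: confirm the pid is alive (`kill -0`), read its command name (`ps --no-headers -o comm=`), refuse to proceed if the name is `sudo`, then kill and wait. A rough Linux-only sketch of the same guard (reading `/proc/<pid>/comm` in place of `ps`; this helper is an illustration, not the harness code):

```python
import os

def can_kill(pid):
    """Mimic the log's killprocess guard: pid must be alive and must not be sudo."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission probe, like `kill -0`
    except (ProcessLookupError, PermissionError):
        return False
    try:
        # The shell helper reads the name with `ps --no-headers -o comm= <pid>`.
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
    except OSError:
        return False
    return name != "sudo"

# The current process is alive and is not named "sudo".
assert can_kill(os.getpid())
```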
00:29:07.613 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.613 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:07.871 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:07.871 [2024-10-14 13:39:57.215626] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:29:07.871 [2024-10-14 13:39:57.215727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329882 ] 00:29:07.871 [2024-10-14 13:39:57.280928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.871 [2024-10-14 13:39:57.327752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.871 [2024-10-14 13:39:58.064621] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 3deb844b-f326-4ea5-9640-86c8dc0fe253 already exists 00:29:07.871 [2024-10-14 13:39:58.064660] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:3deb844b-f326-4ea5-9640-86c8dc0fe253 alias for bdev NVMe1n1 00:29:07.871 [2024-10-14 13:39:58.064675] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:07.871 Running I/O for 1 seconds... 00:29:07.871 18230.00 IOPS, 71.21 MiB/s 00:29:07.871 Latency(us) 00:29:07.871 [2024-10-14T11:39:59.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.871 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:07.871 NVMe0n1 : 1.00 18276.27 71.39 0.00 0.00 6992.25 4344.79 17282.09 00:29:07.871 [2024-10-14T11:39:59.724Z] =================================================================================================================== 00:29:07.871 [2024-10-14T11:39:59.724Z] Total : 18276.27 71.39 0.00 0.00 6992.25 4344.79 17282.09 00:29:07.871 Received shutdown signal, test time was about 1.000000 seconds 00:29:07.871 00:29:07.871 Latency(us) 00:29:07.871 [2024-10-14T11:39:59.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.871 [2024-10-14T11:39:59.724Z] =================================================================================================================== 00:29:07.871 [2024-10-14T11:39:59.724Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:07.871 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.871 rmmod nvme_tcp 00:29:07.871 rmmod nvme_fabrics 00:29:07.871 rmmod nvme_keyring 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 329834 ']' 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 329834 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 329834 ']' 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 329834 
00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 329834 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 329834' 00:29:07.871 killing process with pid 329834 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 329834 00:29:07.871 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 329834 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.129 13:39:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.665 00:29:10.665 real 0m7.428s 00:29:10.665 user 0m11.926s 00:29:10.665 sys 0m2.325s 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:10.665 ************************************ 00:29:10.665 END TEST nvmf_multicontroller 00:29:10.665 ************************************ 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.665 ************************************ 00:29:10.665 START TEST nvmf_aer 00:29:10.665 ************************************ 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:10.665 * Looking for test storage... 
00:29:10.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:29:10.665 13:40:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.665 --rc genhtml_branch_coverage=1 00:29:10.665 --rc genhtml_function_coverage=1 00:29:10.665 --rc genhtml_legend=1 00:29:10.665 --rc geninfo_all_blocks=1 00:29:10.665 --rc geninfo_unexecuted_blocks=1 00:29:10.665 00:29:10.665 ' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.665 --rc 
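The `cmp_versions` trace above shows how `scripts/common.sh` decides `lt 1.15 2`: both versions are split on `.`, `-`, and `:` (`IFS=.-:`), then compared component by component up to the longer length, with missing trailing components treated as zero. A sketch of the same comparison (the function name here is illustrative):

```python
import re

def cmp_lt(ver1: str, ver2: str) -> bool:
    """Component-wise 'less than', mirroring cmp_versions' '<' path in scripts/common.sh."""
    a = [int(x) for x in re.split(r"[.:-]", ver1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", ver2) if x.isdigit()]
    # Loop to the longer length; absent components compare as 0, as in the shell loop.
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return x < y
    return False

assert cmp_lt("1.15", "2")        # the lcov version check traced in the log
assert not cmp_lt("2.0", "1.15")
```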
genhtml_branch_coverage=1 00:29:10.665 --rc genhtml_function_coverage=1 00:29:10.665 --rc genhtml_legend=1 00:29:10.665 --rc geninfo_all_blocks=1 00:29:10.665 --rc geninfo_unexecuted_blocks=1 00:29:10.665 00:29:10.665 ' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.665 --rc genhtml_branch_coverage=1 00:29:10.665 --rc genhtml_function_coverage=1 00:29:10.665 --rc genhtml_legend=1 00:29:10.665 --rc geninfo_all_blocks=1 00:29:10.665 --rc geninfo_unexecuted_blocks=1 00:29:10.665 00:29:10.665 ' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:10.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.665 --rc genhtml_branch_coverage=1 00:29:10.665 --rc genhtml_function_coverage=1 00:29:10.665 --rc genhtml_legend=1 00:29:10.665 --rc geninfo_all_blocks=1 00:29:10.665 --rc geninfo_unexecuted_blocks=1 00:29:10.665 00:29:10.665 ' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.665 13:40:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:10.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.665 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.666 13:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:12.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:12.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.571 13:40:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:12.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:12.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.571 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:12.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:29:12.572 00:29:12.572 --- 10.0.0.2 ping statistics --- 00:29:12.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.572 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:12.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:12.572 00:29:12.572 --- 10.0.0.1 ping statistics --- 00:29:12.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.572 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=332195 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 332195 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 332195 ']' 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.572 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.830 [2024-10-14 13:40:04.436483] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:29:12.830 [2024-10-14 13:40:04.436554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.830 [2024-10-14 13:40:04.511462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.830 [2024-10-14 13:40:04.564033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:12.830 [2024-10-14 13:40:04.564088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.830 [2024-10-14 13:40:04.564119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.830 [2024-10-14 13:40:04.564174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.830 [2024-10-14 13:40:04.564192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.830 [2024-10-14 13:40:04.566056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.830 [2024-10-14 13:40:04.566123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.830 [2024-10-14 13:40:04.566181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.830 [2024-10-14 13:40:04.566188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.088 [2024-10-14 13:40:04.716845] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.088 Malloc0 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.088 [2024-10-14 13:40:04.782190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.088 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.088 [ 00:29:13.088 { 00:29:13.088 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:13.088 "subtype": "Discovery", 00:29:13.088 "listen_addresses": [], 00:29:13.088 "allow_any_host": true, 00:29:13.088 "hosts": [] 00:29:13.088 }, 00:29:13.088 { 00:29:13.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.088 "subtype": "NVMe", 00:29:13.088 "listen_addresses": [ 00:29:13.088 { 00:29:13.088 "trtype": "TCP", 00:29:13.088 "adrfam": "IPv4", 00:29:13.088 "traddr": "10.0.0.2", 00:29:13.088 "trsvcid": "4420" 00:29:13.088 } 00:29:13.088 ], 00:29:13.089 "allow_any_host": true, 00:29:13.089 "hosts": [], 00:29:13.089 "serial_number": "SPDK00000000000001", 00:29:13.089 "model_number": "SPDK bdev Controller", 00:29:13.089 "max_namespaces": 2, 00:29:13.089 "min_cntlid": 1, 00:29:13.089 "max_cntlid": 65519, 00:29:13.089 "namespaces": [ 00:29:13.089 { 00:29:13.089 "nsid": 1, 00:29:13.089 "bdev_name": "Malloc0", 00:29:13.089 "name": "Malloc0", 00:29:13.089 "nguid": "A367C4191BCE4507B81363B50DD6578F", 00:29:13.089 "uuid": "a367c419-1bce-4507-b813-63b50dd6578f" 00:29:13.089 } 00:29:13.089 ] 00:29:13.089 } 00:29:13.089 ] 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=332224 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:13.089 13:40:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.347 Malloc1 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.347 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.347 Asynchronous Event Request test 00:29:13.347 Attaching to 10.0.0.2 00:29:13.347 Attached to 10.0.0.2 00:29:13.347 Registering asynchronous event callbacks... 00:29:13.347 Starting namespace attribute notice tests for all controllers... 00:29:13.347 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:13.347 aer_cb - Changed Namespace 00:29:13.347 Cleaning up... 
00:29:13.347 [ 00:29:13.347 { 00:29:13.347 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:13.347 "subtype": "Discovery", 00:29:13.347 "listen_addresses": [], 00:29:13.347 "allow_any_host": true, 00:29:13.347 "hosts": [] 00:29:13.347 }, 00:29:13.347 { 00:29:13.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.347 "subtype": "NVMe", 00:29:13.347 "listen_addresses": [ 00:29:13.347 { 00:29:13.347 "trtype": "TCP", 00:29:13.347 "adrfam": "IPv4", 00:29:13.347 "traddr": "10.0.0.2", 00:29:13.347 "trsvcid": "4420" 00:29:13.347 } 00:29:13.347 ], 00:29:13.347 "allow_any_host": true, 00:29:13.347 "hosts": [], 00:29:13.347 "serial_number": "SPDK00000000000001", 00:29:13.347 "model_number": "SPDK bdev Controller", 00:29:13.347 "max_namespaces": 2, 00:29:13.347 "min_cntlid": 1, 00:29:13.347 "max_cntlid": 65519, 00:29:13.347 "namespaces": [ 00:29:13.347 { 00:29:13.347 "nsid": 1, 00:29:13.347 "bdev_name": "Malloc0", 00:29:13.347 "name": "Malloc0", 00:29:13.347 "nguid": "A367C4191BCE4507B81363B50DD6578F", 00:29:13.347 "uuid": "a367c419-1bce-4507-b813-63b50dd6578f" 00:29:13.347 }, 00:29:13.347 { 00:29:13.347 "nsid": 2, 00:29:13.347 "bdev_name": "Malloc1", 00:29:13.347 "name": "Malloc1", 00:29:13.347 "nguid": "BB9AE69723CA48C19E44FDEB633D0BA9", 00:29:13.348 "uuid": "bb9ae697-23ca-48c1-9e44-fdeb633d0ba9" 00:29:13.348 } 00:29:13.348 ] 00:29:13.348 } 00:29:13.348 ] 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 332224 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.348 13:40:05 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.348 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.348 rmmod nvme_tcp 00:29:13.348 rmmod nvme_fabrics 00:29:13.348 rmmod nvme_keyring 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 
332195 ']' 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 332195 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 332195 ']' 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 332195 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332195 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332195' 00:29:13.606 killing process with pid 332195 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 332195 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 332195 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:13.606 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:29:13.865 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.865 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.865 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.865 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.865 13:40:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.769 00:29:15.769 real 0m5.561s 00:29:15.769 user 0m4.361s 00:29:15.769 sys 0m2.066s 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:15.769 ************************************ 00:29:15.769 END TEST nvmf_aer 00:29:15.769 ************************************ 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.769 ************************************ 00:29:15.769 START TEST nvmf_async_init 00:29:15.769 ************************************ 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:15.769 * Looking for test storage... 
00:29:15.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:29:15.769 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.028 13:40:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:16.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.028 --rc genhtml_branch_coverage=1 00:29:16.028 --rc genhtml_function_coverage=1 00:29:16.028 --rc genhtml_legend=1 00:29:16.028 --rc geninfo_all_blocks=1 00:29:16.028 --rc geninfo_unexecuted_blocks=1 00:29:16.028 
00:29:16.028 ' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:16.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.028 --rc genhtml_branch_coverage=1 00:29:16.028 --rc genhtml_function_coverage=1 00:29:16.028 --rc genhtml_legend=1 00:29:16.028 --rc geninfo_all_blocks=1 00:29:16.028 --rc geninfo_unexecuted_blocks=1 00:29:16.028 00:29:16.028 ' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:16.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.028 --rc genhtml_branch_coverage=1 00:29:16.028 --rc genhtml_function_coverage=1 00:29:16.028 --rc genhtml_legend=1 00:29:16.028 --rc geninfo_all_blocks=1 00:29:16.028 --rc geninfo_unexecuted_blocks=1 00:29:16.028 00:29:16.028 ' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:16.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.028 --rc genhtml_branch_coverage=1 00:29:16.028 --rc genhtml_function_coverage=1 00:29:16.028 --rc genhtml_legend=1 00:29:16.028 --rc geninfo_all_blocks=1 00:29:16.028 --rc geninfo_unexecuted_blocks=1 00:29:16.028 00:29:16.028 ' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.028 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fcf2c242b61f4236a793cb30cb6fa2c8 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.029 13:40:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.562 13:40:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.562 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.562 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.563 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.563 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.563 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:18.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:29:18.563 00:29:18.563 --- 10.0.0.2 ping statistics --- 00:29:18.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.563 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:29:18.563 00:29:18.563 --- 10.0.0.1 ping statistics --- 00:29:18.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.563 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=334283 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 334283 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 334283 ']' 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.563 13:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.563 [2024-10-14 13:40:10.006948] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:29:18.563 [2024-10-14 13:40:10.007032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.563 [2024-10-14 13:40:10.076447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.563 [2024-10-14 13:40:10.122149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.563 [2024-10-14 13:40:10.122199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.563 [2024-10-14 13:40:10.122223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.563 [2024-10-14 13:40:10.122234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.563 [2024-10-14 13:40:10.122243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:18.563 [2024-10-14 13:40:10.122829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.563 [2024-10-14 13:40:10.272270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.563 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.564 null0 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fcf2c242b61f4236a793cb30cb6fa2c8 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.564 [2024-10-14 13:40:10.312611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.564 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.824 nvme0n1 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.824 [ 00:29:18.824 { 00:29:18.824 "name": "nvme0n1", 00:29:18.824 "aliases": [ 00:29:18.824 "fcf2c242-b61f-4236-a793-cb30cb6fa2c8" 00:29:18.824 ], 00:29:18.824 "product_name": "NVMe disk", 00:29:18.824 "block_size": 512, 00:29:18.824 "num_blocks": 2097152, 00:29:18.824 "uuid": "fcf2c242-b61f-4236-a793-cb30cb6fa2c8", 00:29:18.824 "numa_id": 0, 00:29:18.824 "assigned_rate_limits": { 00:29:18.824 "rw_ios_per_sec": 0, 00:29:18.824 "rw_mbytes_per_sec": 0, 00:29:18.824 "r_mbytes_per_sec": 0, 00:29:18.824 "w_mbytes_per_sec": 0 00:29:18.824 }, 00:29:18.824 "claimed": false, 00:29:18.824 "zoned": false, 00:29:18.824 "supported_io_types": { 00:29:18.824 "read": true, 00:29:18.824 "write": true, 00:29:18.824 "unmap": false, 00:29:18.824 "flush": true, 00:29:18.824 "reset": true, 00:29:18.824 "nvme_admin": true, 00:29:18.824 "nvme_io": true, 00:29:18.824 "nvme_io_md": false, 00:29:18.824 "write_zeroes": true, 00:29:18.824 "zcopy": false, 00:29:18.824 "get_zone_info": false, 00:29:18.824 "zone_management": false, 00:29:18.824 "zone_append": false, 00:29:18.824 "compare": true, 00:29:18.824 "compare_and_write": true, 00:29:18.824 "abort": true, 00:29:18.824 "seek_hole": false, 00:29:18.824 "seek_data": false, 00:29:18.824 "copy": true, 00:29:18.824 
"nvme_iov_md": false 00:29:18.824 }, 00:29:18.824 "memory_domains": [ 00:29:18.824 { 00:29:18.824 "dma_device_id": "system", 00:29:18.824 "dma_device_type": 1 00:29:18.824 } 00:29:18.824 ], 00:29:18.824 "driver_specific": { 00:29:18.824 "nvme": [ 00:29:18.824 { 00:29:18.824 "trid": { 00:29:18.824 "trtype": "TCP", 00:29:18.824 "adrfam": "IPv4", 00:29:18.824 "traddr": "10.0.0.2", 00:29:18.824 "trsvcid": "4420", 00:29:18.824 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:18.824 }, 00:29:18.824 "ctrlr_data": { 00:29:18.824 "cntlid": 1, 00:29:18.824 "vendor_id": "0x8086", 00:29:18.824 "model_number": "SPDK bdev Controller", 00:29:18.824 "serial_number": "00000000000000000000", 00:29:18.824 "firmware_revision": "25.01", 00:29:18.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.824 "oacs": { 00:29:18.824 "security": 0, 00:29:18.824 "format": 0, 00:29:18.824 "firmware": 0, 00:29:18.824 "ns_manage": 0 00:29:18.824 }, 00:29:18.824 "multi_ctrlr": true, 00:29:18.824 "ana_reporting": false 00:29:18.824 }, 00:29:18.824 "vs": { 00:29:18.824 "nvme_version": "1.3" 00:29:18.824 }, 00:29:18.824 "ns_data": { 00:29:18.824 "id": 1, 00:29:18.824 "can_share": true 00:29:18.824 } 00:29:18.824 } 00:29:18.824 ], 00:29:18.824 "mp_policy": "active_passive" 00:29:18.824 } 00:29:18.824 } 00:29:18.824 ] 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.824 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.824 [2024-10-14 13:40:10.561789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.824 [2024-10-14 13:40:10.561863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x264a560 (9): Bad file descriptor 00:29:19.083 [2024-10-14 13:40:10.694272] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.083 [ 00:29:19.083 { 00:29:19.083 "name": "nvme0n1", 00:29:19.083 "aliases": [ 00:29:19.083 "fcf2c242-b61f-4236-a793-cb30cb6fa2c8" 00:29:19.083 ], 00:29:19.083 "product_name": "NVMe disk", 00:29:19.083 "block_size": 512, 00:29:19.083 "num_blocks": 2097152, 00:29:19.083 "uuid": "fcf2c242-b61f-4236-a793-cb30cb6fa2c8", 00:29:19.083 "numa_id": 0, 00:29:19.083 "assigned_rate_limits": { 00:29:19.083 "rw_ios_per_sec": 0, 00:29:19.083 "rw_mbytes_per_sec": 0, 00:29:19.083 "r_mbytes_per_sec": 0, 00:29:19.083 "w_mbytes_per_sec": 0 00:29:19.083 }, 00:29:19.083 "claimed": false, 00:29:19.083 "zoned": false, 00:29:19.083 "supported_io_types": { 00:29:19.083 "read": true, 00:29:19.083 "write": true, 00:29:19.083 "unmap": false, 00:29:19.083 "flush": true, 00:29:19.083 "reset": true, 00:29:19.083 "nvme_admin": true, 00:29:19.083 "nvme_io": true, 00:29:19.083 "nvme_io_md": false, 00:29:19.083 "write_zeroes": true, 00:29:19.083 "zcopy": false, 00:29:19.083 "get_zone_info": false, 00:29:19.083 "zone_management": false, 00:29:19.083 "zone_append": false, 00:29:19.083 "compare": true, 00:29:19.083 "compare_and_write": true, 00:29:19.083 "abort": true, 00:29:19.083 "seek_hole": false, 00:29:19.083 "seek_data": false, 00:29:19.083 "copy": true, 00:29:19.083 "nvme_iov_md": false 00:29:19.083 }, 00:29:19.083 "memory_domains": [ 00:29:19.083 { 00:29:19.083 
"dma_device_id": "system", 00:29:19.083 "dma_device_type": 1 00:29:19.083 } 00:29:19.083 ], 00:29:19.083 "driver_specific": { 00:29:19.083 "nvme": [ 00:29:19.083 { 00:29:19.083 "trid": { 00:29:19.083 "trtype": "TCP", 00:29:19.083 "adrfam": "IPv4", 00:29:19.083 "traddr": "10.0.0.2", 00:29:19.083 "trsvcid": "4420", 00:29:19.083 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:19.083 }, 00:29:19.083 "ctrlr_data": { 00:29:19.083 "cntlid": 2, 00:29:19.083 "vendor_id": "0x8086", 00:29:19.083 "model_number": "SPDK bdev Controller", 00:29:19.083 "serial_number": "00000000000000000000", 00:29:19.083 "firmware_revision": "25.01", 00:29:19.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.083 "oacs": { 00:29:19.083 "security": 0, 00:29:19.083 "format": 0, 00:29:19.083 "firmware": 0, 00:29:19.083 "ns_manage": 0 00:29:19.083 }, 00:29:19.083 "multi_ctrlr": true, 00:29:19.083 "ana_reporting": false 00:29:19.083 }, 00:29:19.083 "vs": { 00:29:19.083 "nvme_version": "1.3" 00:29:19.083 }, 00:29:19.083 "ns_data": { 00:29:19.083 "id": 1, 00:29:19.083 "can_share": true 00:29:19.083 } 00:29:19.083 } 00:29:19.083 ], 00:29:19.083 "mp_policy": "active_passive" 00:29:19.083 } 00:29:19.083 } 00:29:19.083 ] 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.OPleC4qTLC 00:29:19.083 13:40:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.OPleC4qTLC 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.OPleC4qTLC 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.083 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 [2024-10-14 13:40:10.750378] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:19.084 [2024-10-14 13:40:10.750514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.084 13:40:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 [2024-10-14 13:40:10.766433] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:19.084 nvme0n1 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 [ 00:29:19.084 { 00:29:19.084 "name": "nvme0n1", 00:29:19.084 "aliases": [ 00:29:19.084 "fcf2c242-b61f-4236-a793-cb30cb6fa2c8" 00:29:19.084 ], 00:29:19.084 "product_name": "NVMe disk", 00:29:19.084 "block_size": 512, 00:29:19.084 "num_blocks": 2097152, 00:29:19.084 "uuid": "fcf2c242-b61f-4236-a793-cb30cb6fa2c8", 00:29:19.084 "numa_id": 0, 00:29:19.084 "assigned_rate_limits": { 00:29:19.084 "rw_ios_per_sec": 0, 00:29:19.084 "rw_mbytes_per_sec": 0, 
00:29:19.084 "r_mbytes_per_sec": 0, 00:29:19.084 "w_mbytes_per_sec": 0 00:29:19.084 }, 00:29:19.084 "claimed": false, 00:29:19.084 "zoned": false, 00:29:19.084 "supported_io_types": { 00:29:19.084 "read": true, 00:29:19.084 "write": true, 00:29:19.084 "unmap": false, 00:29:19.084 "flush": true, 00:29:19.084 "reset": true, 00:29:19.084 "nvme_admin": true, 00:29:19.084 "nvme_io": true, 00:29:19.084 "nvme_io_md": false, 00:29:19.084 "write_zeroes": true, 00:29:19.084 "zcopy": false, 00:29:19.084 "get_zone_info": false, 00:29:19.084 "zone_management": false, 00:29:19.084 "zone_append": false, 00:29:19.084 "compare": true, 00:29:19.084 "compare_and_write": true, 00:29:19.084 "abort": true, 00:29:19.084 "seek_hole": false, 00:29:19.084 "seek_data": false, 00:29:19.084 "copy": true, 00:29:19.084 "nvme_iov_md": false 00:29:19.084 }, 00:29:19.084 "memory_domains": [ 00:29:19.084 { 00:29:19.084 "dma_device_id": "system", 00:29:19.084 "dma_device_type": 1 00:29:19.084 } 00:29:19.084 ], 00:29:19.084 "driver_specific": { 00:29:19.084 "nvme": [ 00:29:19.084 { 00:29:19.084 "trid": { 00:29:19.084 "trtype": "TCP", 00:29:19.084 "adrfam": "IPv4", 00:29:19.084 "traddr": "10.0.0.2", 00:29:19.084 "trsvcid": "4421", 00:29:19.084 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:19.084 }, 00:29:19.084 "ctrlr_data": { 00:29:19.084 "cntlid": 3, 00:29:19.084 "vendor_id": "0x8086", 00:29:19.084 "model_number": "SPDK bdev Controller", 00:29:19.084 "serial_number": "00000000000000000000", 00:29:19.084 "firmware_revision": "25.01", 00:29:19.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.084 "oacs": { 00:29:19.084 "security": 0, 00:29:19.084 "format": 0, 00:29:19.084 "firmware": 0, 00:29:19.084 "ns_manage": 0 00:29:19.084 }, 00:29:19.084 "multi_ctrlr": true, 00:29:19.084 "ana_reporting": false 00:29:19.084 }, 00:29:19.084 "vs": { 00:29:19.084 "nvme_version": "1.3" 00:29:19.084 }, 00:29:19.084 "ns_data": { 00:29:19.084 "id": 1, 00:29:19.084 "can_share": true 00:29:19.084 } 00:29:19.084 } 
00:29:19.084 ], 00:29:19.084 "mp_policy": "active_passive" 00:29:19.084 } 00:29:19.084 } 00:29:19.084 ] 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.OPleC4qTLC 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.084 rmmod nvme_tcp 00:29:19.084 rmmod nvme_fabrics 00:29:19.084 rmmod nvme_keyring 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:19.084 13:40:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 334283 ']' 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 334283 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 334283 ']' 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 334283 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.084 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 334283 00:29:19.343 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:19.344 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.344 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 334283' 00:29:19.344 killing process with pid 334283 00:29:19.344 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 334283 00:29:19.344 13:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 334283 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:19.344 13:40:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.344 13:40:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.884 00:29:21.884 real 0m5.619s 00:29:21.884 user 0m2.111s 00:29:21.884 sys 0m1.932s 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:21.884 ************************************ 00:29:21.884 END TEST nvmf_async_init 00:29:21.884 ************************************ 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.884 ************************************ 00:29:21.884 START TEST dma 00:29:21.884 ************************************ 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:21.884 * 
Looking for test storage... 00:29:21.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.884 --rc genhtml_branch_coverage=1 00:29:21.884 --rc genhtml_function_coverage=1 00:29:21.884 --rc genhtml_legend=1 00:29:21.884 --rc geninfo_all_blocks=1 00:29:21.884 --rc geninfo_unexecuted_blocks=1 00:29:21.884 00:29:21.884 ' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.884 --rc genhtml_branch_coverage=1 00:29:21.884 --rc genhtml_function_coverage=1 
00:29:21.884 --rc genhtml_legend=1 00:29:21.884 --rc geninfo_all_blocks=1 00:29:21.884 --rc geninfo_unexecuted_blocks=1 00:29:21.884 00:29:21.884 ' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.884 --rc genhtml_branch_coverage=1 00:29:21.884 --rc genhtml_function_coverage=1 00:29:21.884 --rc genhtml_legend=1 00:29:21.884 --rc geninfo_all_blocks=1 00:29:21.884 --rc geninfo_unexecuted_blocks=1 00:29:21.884 00:29:21.884 ' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:21.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.884 --rc genhtml_branch_coverage=1 00:29:21.884 --rc genhtml_function_coverage=1 00:29:21.884 --rc genhtml_legend=1 00:29:21.884 --rc geninfo_all_blocks=1 00:29:21.884 --rc geninfo_unexecuted_blocks=1 00:29:21.884 00:29:21.884 ' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:21.884 
13:40:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.884 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:21.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:21.885 00:29:21.885 real 0m0.173s 00:29:21.885 user 0m0.111s 00:29:21.885 sys 0m0.071s 00:29:21.885 13:40:13 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:21.885 ************************************ 00:29:21.885 END TEST dma 00:29:21.885 ************************************ 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.885 ************************************ 00:29:21.885 START TEST nvmf_identify 00:29:21.885 ************************************ 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:21.885 * Looking for test storage... 
00:29:21.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.885 --rc genhtml_branch_coverage=1 00:29:21.885 --rc genhtml_function_coverage=1 00:29:21.885 --rc genhtml_legend=1 00:29:21.885 --rc geninfo_all_blocks=1 00:29:21.885 --rc geninfo_unexecuted_blocks=1 00:29:21.885 00:29:21.885 ' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.885 --rc genhtml_branch_coverage=1 00:29:21.885 --rc genhtml_function_coverage=1 00:29:21.885 --rc genhtml_legend=1 00:29:21.885 --rc geninfo_all_blocks=1 00:29:21.885 --rc geninfo_unexecuted_blocks=1 00:29:21.885 00:29:21.885 ' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.885 --rc genhtml_branch_coverage=1 00:29:21.885 --rc genhtml_function_coverage=1 00:29:21.885 --rc genhtml_legend=1 00:29:21.885 --rc geninfo_all_blocks=1 00:29:21.885 --rc geninfo_unexecuted_blocks=1 00:29:21.885 00:29:21.885 ' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:21.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.885 --rc genhtml_branch_coverage=1 00:29:21.885 --rc genhtml_function_coverage=1 00:29:21.885 --rc genhtml_legend=1 00:29:21.885 --rc geninfo_all_blocks=1 00:29:21.885 --rc geninfo_unexecuted_blocks=1 00:29:21.885 00:29:21.885 ' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.885 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:21.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.886 13:40:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.419 13:40:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:24.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.419 
13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:24.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:24.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:24.419 13:40:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.419 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:24.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:24.420 00:29:24.420 --- 10.0.0.2 ping statistics --- 00:29:24.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.420 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:29:24.420 00:29:24.420 --- 10.0.0.1 ping statistics --- 00:29:24.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.420 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=336429 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 336429 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 336429 ']' 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.420 13:40:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 [2024-10-14 13:40:15.902380] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:29:24.420 [2024-10-14 13:40:15.902477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.420 [2024-10-14 13:40:15.970328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.420 [2024-10-14 13:40:16.019400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.420 [2024-10-14 13:40:16.019468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.420 [2024-10-14 13:40:16.019492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.420 [2024-10-14 13:40:16.019504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.420 [2024-10-14 13:40:16.019513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:24.420 [2024-10-14 13:40:16.022148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.420 [2024-10-14 13:40:16.022215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.420 [2024-10-14 13:40:16.022282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.420 [2024-10-14 13:40:16.022285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 [2024-10-14 13:40:16.149845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 Malloc0 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.420 13:40:16 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 [2024-10-14 13:40:16.228185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 13:40:16 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.420 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.420 [ 00:29:24.420 { 00:29:24.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.420 "subtype": "Discovery", 00:29:24.420 "listen_addresses": [ 00:29:24.420 { 00:29:24.420 "trtype": "TCP", 00:29:24.420 "adrfam": "IPv4", 00:29:24.420 "traddr": "10.0.0.2", 00:29:24.420 "trsvcid": "4420" 00:29:24.420 } 00:29:24.420 ], 00:29:24.420 "allow_any_host": true, 00:29:24.420 "hosts": [] 00:29:24.420 }, 00:29:24.420 { 00:29:24.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.420 "subtype": "NVMe", 00:29:24.421 "listen_addresses": [ 00:29:24.421 { 00:29:24.421 "trtype": "TCP", 00:29:24.421 "adrfam": "IPv4", 00:29:24.421 "traddr": "10.0.0.2", 00:29:24.421 "trsvcid": "4420" 00:29:24.421 } 00:29:24.421 ], 00:29:24.421 "allow_any_host": true, 00:29:24.421 "hosts": [], 00:29:24.421 "serial_number": "SPDK00000000000001", 00:29:24.421 "model_number": "SPDK bdev Controller", 00:29:24.421 "max_namespaces": 32, 00:29:24.421 "min_cntlid": 1, 00:29:24.421 "max_cntlid": 65519, 00:29:24.421 "namespaces": [ 00:29:24.421 { 00:29:24.421 "nsid": 1, 00:29:24.421 "bdev_name": "Malloc0", 00:29:24.421 "name": "Malloc0", 00:29:24.421 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:24.421 "eui64": "ABCDEF0123456789", 00:29:24.421 "uuid": "8b65f60e-9392-4b7b-adba-c22d88ab44e0" 00:29:24.421 } 00:29:24.421 ] 00:29:24.421 } 00:29:24.421 ] 00:29:24.421 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.421 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:24.421 [2024-10-14 13:40:16.266756] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:29:24.421 [2024-10-14 13:40:16.266792] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336462 ] 00:29:24.682 [2024-10-14 13:40:16.301527] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:24.682 [2024-10-14 13:40:16.301598] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.682 [2024-10-14 13:40:16.301608] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.682 [2024-10-14 13:40:16.301625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.682 [2024-10-14 13:40:16.301640] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.682 [2024-10-14 13:40:16.302338] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:24.682 [2024-10-14 13:40:16.302395] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e1a210 0 00:29:24.682 [2024-10-14 13:40:16.312142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.682 [2024-10-14 13:40:16.312166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.682 [2024-10-14 13:40:16.312176] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.682 [2024-10-14 13:40:16.312182] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.682 [2024-10-14 13:40:16.312243] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.312258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.312266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.682 [2024-10-14 13:40:16.312292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.682 [2024-10-14 13:40:16.312320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.682 [2024-10-14 13:40:16.323145] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.682 [2024-10-14 13:40:16.323164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.682 [2024-10-14 13:40:16.323171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323179] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.682 [2024-10-14 13:40:16.323201] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.682 [2024-10-14 13:40:16.323213] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:24.682 [2024-10-14 13:40:16.323224] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:24.682 [2024-10-14 13:40:16.323249] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323265] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 
00:29:24.682 [2024-10-14 13:40:16.323276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.682 [2024-10-14 13:40:16.323300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.682 [2024-10-14 13:40:16.323483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.682 [2024-10-14 13:40:16.323497] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.682 [2024-10-14 13:40:16.323504] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323511] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.682 [2024-10-14 13:40:16.323521] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:24.682 [2024-10-14 13:40:16.323534] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:24.682 [2024-10-14 13:40:16.323547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323555] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323561] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.682 [2024-10-14 13:40:16.323572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.682 [2024-10-14 13:40:16.323593] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.682 [2024-10-14 13:40:16.323735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.682 [2024-10-14 13:40:16.323747] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:24.682 [2024-10-14 13:40:16.323753] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323760] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.682 [2024-10-14 13:40:16.323770] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:24.682 [2024-10-14 13:40:16.323784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.682 [2024-10-14 13:40:16.323796] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323810] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.682 [2024-10-14 13:40:16.323825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.682 [2024-10-14 13:40:16.323848] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.682 [2024-10-14 13:40:16.323921] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.682 [2024-10-14 13:40:16.323933] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.682 [2024-10-14 13:40:16.323940] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.682 [2024-10-14 13:40:16.323956] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.682 [2024-10-14 13:40:16.323972] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323981] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.323987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.682 [2024-10-14 13:40:16.323998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.682 [2024-10-14 13:40:16.324019] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.682 [2024-10-14 13:40:16.324095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.682 [2024-10-14 13:40:16.324109] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.682 [2024-10-14 13:40:16.324116] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324123] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.682 [2024-10-14 13:40:16.324140] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:24.682 [2024-10-14 13:40:16.324151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:24.682 [2024-10-14 13:40:16.324164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.682 [2024-10-14 13:40:16.324274] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:24.682 [2024-10-14 13:40:16.324283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:29:24.682 [2024-10-14 13:40:16.324298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324312] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.682 [2024-10-14 13:40:16.324323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.682 [2024-10-14 13:40:16.324344] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.682 [2024-10-14 13:40:16.324470] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.682 [2024-10-14 13:40:16.324482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.682 [2024-10-14 13:40:16.324489] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324495] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.682 [2024-10-14 13:40:16.324505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.682 [2024-10-14 13:40:16.324520] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324529] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324540] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.682 [2024-10-14 13:40:16.324551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.682 [2024-10-14 13:40:16.324572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.682 [2024-10-14 
13:40:16.324649] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.682 [2024-10-14 13:40:16.324663] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.682 [2024-10-14 13:40:16.324670] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.682 [2024-10-14 13:40:16.324684] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.682 [2024-10-14 13:40:16.324692] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:24.682 [2024-10-14 13:40:16.324706] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:24.682 [2024-10-14 13:40:16.324720] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.682 [2024-10-14 13:40:16.324739] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.682 [2024-10-14 13:40:16.324747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.324758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-10-14 13:40:16.324779] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.683 [2024-10-14 13:40:16.324896] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.683 [2024-10-14 13:40:16.324910] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:29:24.683 [2024-10-14 13:40:16.324917] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.324924] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1a210): datao=0, datal=4096, cccid=0 00:29:24.683 [2024-10-14 13:40:16.324932] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84440) on tqpair(0x1e1a210): expected_datao=0, payload_size=4096 00:29:24.683 [2024-10-14 13:40:16.324940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.324958] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.324968] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365281] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.683 [2024-10-14 13:40:16.365300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.683 [2024-10-14 13:40:16.365308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365315] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.683 [2024-10-14 13:40:16.365328] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:24.683 [2024-10-14 13:40:16.365338] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:24.683 [2024-10-14 13:40:16.365346] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:24.683 [2024-10-14 13:40:16.365355] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:24.683 [2024-10-14 13:40:16.365362] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:29:24.683 [2024-10-14 13:40:16.365371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:24.683 [2024-10-14 13:40:16.365391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.683 [2024-10-14 13:40:16.365405] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365413] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365420] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.365432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.683 [2024-10-14 13:40:16.365456] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.683 [2024-10-14 13:40:16.365585] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.683 [2024-10-14 13:40:16.365597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.683 [2024-10-14 13:40:16.365604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365611] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.683 [2024-10-14 13:40:16.365625] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365639] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.365649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.683 [2024-10-14 13:40:16.365659] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365666] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365673] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.365681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.683 [2024-10-14 13:40:16.365691] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.365713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.683 [2024-10-14 13:40:16.365723] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365736] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.365744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.683 [2024-10-14 13:40:16.365753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.683 [2024-10-14 13:40:16.365774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:29:24.683 [2024-10-14 13:40:16.365787] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.365795] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.365805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-10-14 13:40:16.365842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84440, cid 0, qid 0 00:29:24.683 [2024-10-14 13:40:16.365858] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e845c0, cid 1, qid 0 00:29:24.683 [2024-10-14 13:40:16.365866] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84740, cid 2, qid 0 00:29:24.683 [2024-10-14 13:40:16.365873] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848c0, cid 3, qid 0 00:29:24.683 [2024-10-14 13:40:16.365881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84a40, cid 4, qid 0 00:29:24.683 [2024-10-14 13:40:16.366067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.683 [2024-10-14 13:40:16.366082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.683 [2024-10-14 13:40:16.366089] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84a40) on tqpair=0x1e1a210 00:29:24.683 [2024-10-14 13:40:16.366106] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:24.683 [2024-10-14 13:40:16.366115] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:24.683 [2024-10-14 13:40:16.366140] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.366162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-10-14 13:40:16.366185] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84a40, cid 4, qid 0 00:29:24.683 [2024-10-14 13:40:16.366327] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.683 [2024-10-14 13:40:16.366339] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.683 [2024-10-14 13:40:16.366346] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1a210): datao=0, datal=4096, cccid=4 00:29:24.683 [2024-10-14 13:40:16.366360] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84a40) on tqpair(0x1e1a210): expected_datao=0, payload_size=4096 00:29:24.683 [2024-10-14 13:40:16.366368] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366378] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366385] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366397] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.683 [2024-10-14 13:40:16.366406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.683 [2024-10-14 13:40:16.366413] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366420] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84a40) on tqpair=0x1e1a210 00:29:24.683 [2024-10-14 13:40:16.366440] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:24.683 [2024-10-14 13:40:16.366488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366500] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.366511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-10-14 13:40:16.366522] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366530] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366536] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.366545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.683 [2024-10-14 13:40:16.366567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84a40, cid 4, qid 0 00:29:24.683 [2024-10-14 13:40:16.366582] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84bc0, cid 5, qid 0 00:29:24.683 [2024-10-14 13:40:16.366706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.683 [2024-10-14 13:40:16.366718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.683 [2024-10-14 13:40:16.366725] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366731] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1a210): datao=0, datal=1024, cccid=4 00:29:24.683 [2024-10-14 13:40:16.366739] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84a40) on tqpair(0x1e1a210): expected_datao=0, 
payload_size=1024 00:29:24.683 [2024-10-14 13:40:16.366746] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366756] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366763] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366771] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.683 [2024-10-14 13:40:16.366780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.683 [2024-10-14 13:40:16.366787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.366793] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84bc0) on tqpair=0x1e1a210 00:29:24.683 [2024-10-14 13:40:16.411154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.683 [2024-10-14 13:40:16.411172] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.683 [2024-10-14 13:40:16.411180] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.411187] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84a40) on tqpair=0x1e1a210 00:29:24.683 [2024-10-14 13:40:16.411205] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.683 [2024-10-14 13:40:16.411214] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1a210) 00:29:24.683 [2024-10-14 13:40:16.411225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-10-14 13:40:16.411271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84a40, cid 4, qid 0 00:29:24.684 [2024-10-14 13:40:16.411404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.684 [2024-10-14 13:40:16.411418] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.684 [2024-10-14 13:40:16.411425] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.411432] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1a210): datao=0, datal=3072, cccid=4 00:29:24.684 [2024-10-14 13:40:16.411439] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84a40) on tqpair(0x1e1a210): expected_datao=0, payload_size=3072 00:29:24.684 [2024-10-14 13:40:16.411447] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.411467] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.411476] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.452229] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.684 [2024-10-14 13:40:16.452247] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.684 [2024-10-14 13:40:16.452254] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.452261] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84a40) on tqpair=0x1e1a210 00:29:24.684 [2024-10-14 13:40:16.452277] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.452286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e1a210) 00:29:24.684 [2024-10-14 13:40:16.452297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-10-14 13:40:16.452332] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e84a40, cid 4, qid 0 00:29:24.684 [2024-10-14 13:40:16.452427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.684 [2024-10-14 
13:40:16.452439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.684 [2024-10-14 13:40:16.452447] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.452453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e1a210): datao=0, datal=8, cccid=4 00:29:24.684 [2024-10-14 13:40:16.452461] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e84a40) on tqpair(0x1e1a210): expected_datao=0, payload_size=8 00:29:24.684 [2024-10-14 13:40:16.452468] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.452478] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.452485] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.493229] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.684 [2024-10-14 13:40:16.493249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.684 [2024-10-14 13:40:16.493257] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.684 [2024-10-14 13:40:16.493265] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84a40) on tqpair=0x1e1a210 00:29:24.684 ===================================================== 00:29:24.684 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:24.684 ===================================================== 00:29:24.684 Controller Capabilities/Features 00:29:24.684 ================================ 00:29:24.684 Vendor ID: 0000 00:29:24.684 Subsystem Vendor ID: 0000 00:29:24.684 Serial Number: .................... 00:29:24.684 Model Number: ........................................ 
00:29:24.684 Firmware Version: 25.01 00:29:24.684 Recommended Arb Burst: 0 00:29:24.684 IEEE OUI Identifier: 00 00 00 00:29:24.684 Multi-path I/O 00:29:24.684 May have multiple subsystem ports: No 00:29:24.684 May have multiple controllers: No 00:29:24.684 Associated with SR-IOV VF: No 00:29:24.684 Max Data Transfer Size: 131072 00:29:24.684 Max Number of Namespaces: 0 00:29:24.684 Max Number of I/O Queues: 1024 00:29:24.684 NVMe Specification Version (VS): 1.3 00:29:24.684 NVMe Specification Version (Identify): 1.3 00:29:24.684 Maximum Queue Entries: 128 00:29:24.684 Contiguous Queues Required: Yes 00:29:24.684 Arbitration Mechanisms Supported 00:29:24.684 Weighted Round Robin: Not Supported 00:29:24.684 Vendor Specific: Not Supported 00:29:24.684 Reset Timeout: 15000 ms 00:29:24.684 Doorbell Stride: 4 bytes 00:29:24.684 NVM Subsystem Reset: Not Supported 00:29:24.684 Command Sets Supported 00:29:24.684 NVM Command Set: Supported 00:29:24.684 Boot Partition: Not Supported 00:29:24.684 Memory Page Size Minimum: 4096 bytes 00:29:24.684 Memory Page Size Maximum: 4096 bytes 00:29:24.684 Persistent Memory Region: Not Supported 00:29:24.684 Optional Asynchronous Events Supported 00:29:24.684 Namespace Attribute Notices: Not Supported 00:29:24.684 Firmware Activation Notices: Not Supported 00:29:24.684 ANA Change Notices: Not Supported 00:29:24.684 PLE Aggregate Log Change Notices: Not Supported 00:29:24.684 LBA Status Info Alert Notices: Not Supported 00:29:24.684 EGE Aggregate Log Change Notices: Not Supported 00:29:24.684 Normal NVM Subsystem Shutdown event: Not Supported 00:29:24.684 Zone Descriptor Change Notices: Not Supported 00:29:24.684 Discovery Log Change Notices: Supported 00:29:24.684 Controller Attributes 00:29:24.684 128-bit Host Identifier: Not Supported 00:29:24.684 Non-Operational Permissive Mode: Not Supported 00:29:24.684 NVM Sets: Not Supported 00:29:24.684 Read Recovery Levels: Not Supported 00:29:24.684 Endurance Groups: Not Supported 00:29:24.684 
Predictable Latency Mode: Not Supported 00:29:24.684 Traffic Based Keep ALive: Not Supported 00:29:24.684 Namespace Granularity: Not Supported 00:29:24.684 SQ Associations: Not Supported 00:29:24.684 UUID List: Not Supported 00:29:24.684 Multi-Domain Subsystem: Not Supported 00:29:24.684 Fixed Capacity Management: Not Supported 00:29:24.684 Variable Capacity Management: Not Supported 00:29:24.684 Delete Endurance Group: Not Supported 00:29:24.684 Delete NVM Set: Not Supported 00:29:24.684 Extended LBA Formats Supported: Not Supported 00:29:24.684 Flexible Data Placement Supported: Not Supported 00:29:24.684 00:29:24.684 Controller Memory Buffer Support 00:29:24.684 ================================ 00:29:24.684 Supported: No 00:29:24.684 00:29:24.684 Persistent Memory Region Support 00:29:24.684 ================================ 00:29:24.684 Supported: No 00:29:24.684 00:29:24.684 Admin Command Set Attributes 00:29:24.684 ============================ 00:29:24.684 Security Send/Receive: Not Supported 00:29:24.684 Format NVM: Not Supported 00:29:24.684 Firmware Activate/Download: Not Supported 00:29:24.684 Namespace Management: Not Supported 00:29:24.684 Device Self-Test: Not Supported 00:29:24.684 Directives: Not Supported 00:29:24.684 NVMe-MI: Not Supported 00:29:24.684 Virtualization Management: Not Supported 00:29:24.684 Doorbell Buffer Config: Not Supported 00:29:24.684 Get LBA Status Capability: Not Supported 00:29:24.684 Command & Feature Lockdown Capability: Not Supported 00:29:24.684 Abort Command Limit: 1 00:29:24.684 Async Event Request Limit: 4 00:29:24.684 Number of Firmware Slots: N/A 00:29:24.684 Firmware Slot 1 Read-Only: N/A 00:29:24.684 Firmware Activation Without Reset: N/A 00:29:24.684 Multiple Update Detection Support: N/A 00:29:24.684 Firmware Update Granularity: No Information Provided 00:29:24.684 Per-Namespace SMART Log: No 00:29:24.684 Asymmetric Namespace Access Log Page: Not Supported 00:29:24.684 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:24.684 Command Effects Log Page: Not Supported 00:29:24.684 Get Log Page Extended Data: Supported 00:29:24.684 Telemetry Log Pages: Not Supported 00:29:24.684 Persistent Event Log Pages: Not Supported 00:29:24.684 Supported Log Pages Log Page: May Support 00:29:24.684 Commands Supported & Effects Log Page: Not Supported 00:29:24.684 Feature Identifiers & Effects Log Page:May Support 00:29:24.684 NVMe-MI Commands & Effects Log Page: May Support 00:29:24.684 Data Area 4 for Telemetry Log: Not Supported 00:29:24.684 Error Log Page Entries Supported: 128 00:29:24.684 Keep Alive: Not Supported 00:29:24.684 00:29:24.684 NVM Command Set Attributes 00:29:24.684 ========================== 00:29:24.684 Submission Queue Entry Size 00:29:24.684 Max: 1 00:29:24.684 Min: 1 00:29:24.684 Completion Queue Entry Size 00:29:24.684 Max: 1 00:29:24.684 Min: 1 00:29:24.684 Number of Namespaces: 0 00:29:24.684 Compare Command: Not Supported 00:29:24.684 Write Uncorrectable Command: Not Supported 00:29:24.684 Dataset Management Command: Not Supported 00:29:24.684 Write Zeroes Command: Not Supported 00:29:24.684 Set Features Save Field: Not Supported 00:29:24.684 Reservations: Not Supported 00:29:24.684 Timestamp: Not Supported 00:29:24.684 Copy: Not Supported 00:29:24.684 Volatile Write Cache: Not Present 00:29:24.684 Atomic Write Unit (Normal): 1 00:29:24.684 Atomic Write Unit (PFail): 1 00:29:24.684 Atomic Compare & Write Unit: 1 00:29:24.684 Fused Compare & Write: Supported 00:29:24.684 Scatter-Gather List 00:29:24.684 SGL Command Set: Supported 00:29:24.684 SGL Keyed: Supported 00:29:24.684 SGL Bit Bucket Descriptor: Not Supported 00:29:24.684 SGL Metadata Pointer: Not Supported 00:29:24.684 Oversized SGL: Not Supported 00:29:24.684 SGL Metadata Address: Not Supported 00:29:24.684 SGL Offset: Supported 00:29:24.684 Transport SGL Data Block: Not Supported 00:29:24.684 Replay Protected Memory Block: Not Supported 00:29:24.684 00:29:24.684 
Firmware Slot Information 00:29:24.684 ========================= 00:29:24.684 Active slot: 0 00:29:24.684 00:29:24.684 00:29:24.684 Error Log 00:29:24.684 ========= 00:29:24.684 00:29:24.684 Active Namespaces 00:29:24.684 ================= 00:29:24.684 Discovery Log Page 00:29:24.684 ================== 00:29:24.684 Generation Counter: 2 00:29:24.684 Number of Records: 2 00:29:24.684 Record Format: 0 00:29:24.684 00:29:24.684 Discovery Log Entry 0 00:29:24.684 ---------------------- 00:29:24.684 Transport Type: 3 (TCP) 00:29:24.684 Address Family: 1 (IPv4) 00:29:24.685 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:24.685 Entry Flags: 00:29:24.685 Duplicate Returned Information: 1 00:29:24.685 Explicit Persistent Connection Support for Discovery: 1 00:29:24.685 Transport Requirements: 00:29:24.685 Secure Channel: Not Required 00:29:24.685 Port ID: 0 (0x0000) 00:29:24.685 Controller ID: 65535 (0xffff) 00:29:24.685 Admin Max SQ Size: 128 00:29:24.685 Transport Service Identifier: 4420 00:29:24.685 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:24.685 Transport Address: 10.0.0.2 00:29:24.685 Discovery Log Entry 1 00:29:24.685 ---------------------- 00:29:24.685 Transport Type: 3 (TCP) 00:29:24.685 Address Family: 1 (IPv4) 00:29:24.685 Subsystem Type: 2 (NVM Subsystem) 00:29:24.685 Entry Flags: 00:29:24.685 Duplicate Returned Information: 0 00:29:24.685 Explicit Persistent Connection Support for Discovery: 0 00:29:24.685 Transport Requirements: 00:29:24.685 Secure Channel: Not Required 00:29:24.685 Port ID: 0 (0x0000) 00:29:24.685 Controller ID: 65535 (0xffff) 00:29:24.685 Admin Max SQ Size: 128 00:29:24.685 Transport Service Identifier: 4420 00:29:24.685 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:24.685 Transport Address: 10.0.0.2 [2024-10-14 13:40:16.493392] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:24.685 [2024-10-14 13:40:16.493416] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84440) on tqpair=0x1e1a210 00:29:24.685 [2024-10-14 13:40:16.493430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-10-14 13:40:16.493440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e845c0) on tqpair=0x1e1a210 00:29:24.685 [2024-10-14 13:40:16.493448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-10-14 13:40:16.493456] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e84740) on tqpair=0x1e1a210 00:29:24.685 [2024-10-14 13:40:16.493463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-10-14 13:40:16.493472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e848c0) on tqpair=0x1e1a210 00:29:24.685 [2024-10-14 13:40:16.493479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-10-14 13:40:16.493494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.493503] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.493509] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1a210) 00:29:24.685 [2024-10-14 13:40:16.493521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-10-14 13:40:16.493548] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848c0, cid 3, qid 0 00:29:24.685 [2024-10-14 13:40:16.493679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.685 [2024-10-14 13:40:16.493693] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.685 [2024-10-14 13:40:16.493700] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.493707] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e848c0) on tqpair=0x1e1a210 00:29:24.685 [2024-10-14 13:40:16.493721] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.493728] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.493735] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1a210) 00:29:24.685 [2024-10-14 13:40:16.493746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-10-14 13:40:16.493778] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848c0, cid 3, qid 0 00:29:24.685 [2024-10-14 13:40:16.493871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.685 [2024-10-14 13:40:16.493884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.685 [2024-10-14 13:40:16.493890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.493897] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e848c0) on tqpair=0x1e1a210 00:29:24.685 [2024-10-14 13:40:16.493907] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:24.685 [2024-10-14 13:40:16.493923] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:24.685 [2024-10-14 13:40:16.493941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.493950] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.685 [2024-10-14 
13:40:16.493957] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1a210) 00:29:24.685 [2024-10-14 13:40:16.493967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-10-14 13:40:16.493989] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848c0, cid 3, qid 0 00:29:24.685 [2024-10-14 13:40:16.494080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.685 [2024-10-14 13:40:16.494092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.685 [2024-10-14 13:40:16.494099] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.494106] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e848c0) on tqpair=0x1e1a210 00:29:24.685 [2024-10-14 13:40:16.494123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.498144] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.498153] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e1a210) 00:29:24.685 [2024-10-14 13:40:16.498164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-10-14 13:40:16.498187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e848c0, cid 3, qid 0 00:29:24.685 [2024-10-14 13:40:16.498308] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.685 [2024-10-14 13:40:16.498322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.685 [2024-10-14 13:40:16.498329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.685 [2024-10-14 13:40:16.498336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e848c0) on tqpair=0x1e1a210 
00:29:24.685 [2024-10-14 13:40:16.498351] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:29:24.685 00:29:24.685 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:24.948 [2024-10-14 13:40:16.534402] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:29:24.948 [2024-10-14 13:40:16.534457] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336465 ] 00:29:24.948 [2024-10-14 13:40:16.568993] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:24.948 [2024-10-14 13:40:16.569046] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.948 [2024-10-14 13:40:16.569056] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.948 [2024-10-14 13:40:16.569073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.948 [2024-10-14 13:40:16.569087] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.948 [2024-10-14 13:40:16.569534] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:24.948 [2024-10-14 13:40:16.569591] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18e5210 0 00:29:24.948 [2024-10-14 13:40:16.580142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.948 [2024-10-14 13:40:16.580162] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.948 [2024-10-14 13:40:16.580170] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.948 [2024-10-14 13:40:16.580176] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.948 [2024-10-14 13:40:16.580222] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.580234] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.580241] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.948 [2024-10-14 13:40:16.580256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.948 [2024-10-14 13:40:16.580283] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.948 [2024-10-14 13:40:16.588142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.948 [2024-10-14 13:40:16.588161] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.948 [2024-10-14 13:40:16.588168] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.948 [2024-10-14 13:40:16.588194] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.948 [2024-10-14 13:40:16.588205] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:24.948 [2024-10-14 13:40:16.588215] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:24.948 [2024-10-14 13:40:16.588232] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588241] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.948 [2024-10-14 13:40:16.588259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.948 [2024-10-14 13:40:16.588283] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.948 [2024-10-14 13:40:16.588376] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.948 [2024-10-14 13:40:16.588389] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.948 [2024-10-14 13:40:16.588396] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.948 [2024-10-14 13:40:16.588410] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:24.948 [2024-10-14 13:40:16.588423] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:24.948 [2024-10-14 13:40:16.588435] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588443] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.948 [2024-10-14 13:40:16.588464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.948 [2024-10-14 13:40:16.588487] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.948 [2024-10-14 13:40:16.588570] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.948 [2024-10-14 13:40:16.588584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.948 [2024-10-14 13:40:16.588591] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.948 [2024-10-14 13:40:16.588605] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:24.948 [2024-10-14 13:40:16.588620] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.948 [2024-10-14 13:40:16.588632] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588639] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.948 [2024-10-14 13:40:16.588656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.948 [2024-10-14 13:40:16.588678] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.948 [2024-10-14 13:40:16.588772] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.948 [2024-10-14 13:40:16.588786] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.948 [2024-10-14 13:40:16.588793] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588799] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.948 [2024-10-14 13:40:16.588808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.948 [2024-10-14 13:40:16.588832] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588842] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588849] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.948 [2024-10-14 13:40:16.588859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.948 [2024-10-14 13:40:16.588881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.948 [2024-10-14 13:40:16.588959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.948 [2024-10-14 13:40:16.588973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.948 [2024-10-14 13:40:16.588980] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.588987] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.948 [2024-10-14 13:40:16.588994] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:24.948 [2024-10-14 13:40:16.589002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:24.948 [2024-10-14 13:40:16.589015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.948 [2024-10-14 13:40:16.589125] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:24.948 [2024-10-14 13:40:16.589142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:24.948 [2024-10-14 13:40:16.589154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.589166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.589173] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.948 [2024-10-14 13:40:16.589183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.948 [2024-10-14 13:40:16.589206] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.948 [2024-10-14 13:40:16.589298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.948 [2024-10-14 13:40:16.589312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.948 [2024-10-14 13:40:16.589319] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.589325] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.948 [2024-10-14 13:40:16.589334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.948 [2024-10-14 13:40:16.589350] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.589359] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.948 [2024-10-14 13:40:16.589365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.948 [2024-10-14 13:40:16.589376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.948 [2024-10-14 13:40:16.589398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, 
cid 0, qid 0 00:29:24.948 [2024-10-14 13:40:16.589474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.949 [2024-10-14 13:40:16.589487] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.949 [2024-10-14 13:40:16.589493] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589500] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.949 [2024-10-14 13:40:16.589507] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.949 [2024-10-14 13:40:16.589515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.589528] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:24.949 [2024-10-14 13:40:16.589542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.589556] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.589574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.949 [2024-10-14 13:40:16.589596] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.949 [2024-10-14 13:40:16.589711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.949 [2024-10-14 13:40:16.589724] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:29:24.949 [2024-10-14 13:40:16.589731] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589737] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=4096, cccid=0 00:29:24.949 [2024-10-14 13:40:16.589745] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194f440) on tqpair(0x18e5210): expected_datao=0, payload_size=4096 00:29:24.949 [2024-10-14 13:40:16.589752] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589769] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589778] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589793] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.949 [2024-10-14 13:40:16.589804] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.949 [2024-10-14 13:40:16.589811] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589817] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.949 [2024-10-14 13:40:16.589828] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:24.949 [2024-10-14 13:40:16.589836] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:24.949 [2024-10-14 13:40:16.589844] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:24.949 [2024-10-14 13:40:16.589850] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:24.949 [2024-10-14 13:40:16.589857] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:24.949 [2024-10-14 13:40:16.589865] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.589879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.589891] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589899] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.589905] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.589916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.949 [2024-10-14 13:40:16.589938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.949 [2024-10-14 13:40:16.590032] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.949 [2024-10-14 13:40:16.590046] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.949 [2024-10-14 13:40:16.590052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590059] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.949 [2024-10-14 13:40:16.590069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590076] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590083] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.590092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.949 [2024-10-14 
13:40:16.590102] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590109] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.590124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.949 [2024-10-14 13:40:16.590143] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590150] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590157] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.590165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.949 [2024-10-14 13:40:16.590175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.590201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.949 [2024-10-14 13:40:16.590210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.590228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.590241] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590248] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.590259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.949 [2024-10-14 13:40:16.590282] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f440, cid 0, qid 0 00:29:24.949 [2024-10-14 13:40:16.590293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f5c0, cid 1, qid 0 00:29:24.949 [2024-10-14 13:40:16.590301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f740, cid 2, qid 0 00:29:24.949 [2024-10-14 13:40:16.590308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.949 [2024-10-14 13:40:16.590315] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fa40, cid 4, qid 0 00:29:24.949 [2024-10-14 13:40:16.590420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.949 [2024-10-14 13:40:16.590432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.949 [2024-10-14 13:40:16.590439] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590446] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fa40) on tqpair=0x18e5210 00:29:24.949 [2024-10-14 13:40:16.590454] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:24.949 [2024-10-14 13:40:16.590462] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.590480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to set number of queues (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.590498] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.590510] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590518] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590524] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e5210) 00:29:24.949 [2024-10-14 13:40:16.590534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.949 [2024-10-14 13:40:16.590556] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fa40, cid 4, qid 0 00:29:24.949 [2024-10-14 13:40:16.590669] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.949 [2024-10-14 13:40:16.590683] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.949 [2024-10-14 13:40:16.590690] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fa40) on tqpair=0x18e5210 00:29:24.949 [2024-10-14 13:40:16.590767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.590787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:24.949 [2024-10-14 13:40:16.590802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590813] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e5210) 00:29:24.949 
[2024-10-14 13:40:16.590824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.949 [2024-10-14 13:40:16.590846] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fa40, cid 4, qid 0 00:29:24.949 [2024-10-14 13:40:16.590963] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.949 [2024-10-14 13:40:16.590979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.949 [2024-10-14 13:40:16.590985] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.949 [2024-10-14 13:40:16.590992] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=4096, cccid=4 00:29:24.949 [2024-10-14 13:40:16.590999] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194fa40) on tqpair(0x18e5210): expected_datao=0, payload_size=4096 00:29:24.949 [2024-10-14 13:40:16.591006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.591023] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.591032] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.631201] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.631221] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.631229] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.631236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fa40) on tqpair=0x18e5210 00:29:24.950 [2024-10-14 13:40:16.631254] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:24.950 [2024-10-14 13:40:16.631272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.631291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.631305] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.631313] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e5210) 00:29:24.950 [2024-10-14 13:40:16.631324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.950 [2024-10-14 13:40:16.631348] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fa40, cid 4, qid 0 00:29:24.950 [2024-10-14 13:40:16.631461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.950 [2024-10-14 13:40:16.631480] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.950 [2024-10-14 13:40:16.631486] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.631493] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=4096, cccid=4 00:29:24.950 [2024-10-14 13:40:16.631500] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194fa40) on tqpair(0x18e5210): expected_datao=0, payload_size=4096 00:29:24.950 [2024-10-14 13:40:16.631508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.631524] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.631533] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.675177] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.675185] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fa40) on tqpair=0x18e5210 00:29:24.950 [2024-10-14 13:40:16.675217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675257] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675265] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e5210) 00:29:24.950 [2024-10-14 13:40:16.675276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.950 [2024-10-14 13:40:16.675301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fa40, cid 4, qid 0 00:29:24.950 [2024-10-14 13:40:16.675405] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.950 [2024-10-14 13:40:16.675420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.950 [2024-10-14 13:40:16.675427] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675433] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=4096, cccid=4 00:29:24.950 [2024-10-14 13:40:16.675441] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194fa40) on tqpair(0x18e5210): expected_datao=0, payload_size=4096 00:29:24.950 [2024-10-14 13:40:16.675448] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:29:24.950 [2024-10-14 13:40:16.675459] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675466] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675478] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.675487] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.675494] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fa40) on tqpair=0x18e5210 00:29:24.950 [2024-10-14 13:40:16.675515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675529] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675566] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675584] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:24.950 [2024-10-14 13:40:16.675591] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:24.950 [2024-10-14 13:40:16.675600] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:24.950 [2024-10-14 13:40:16.675619] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675628] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e5210) 00:29:24.950 [2024-10-14 13:40:16.675639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.950 [2024-10-14 13:40:16.675650] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e5210) 00:29:24.950 [2024-10-14 13:40:16.675677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.950 [2024-10-14 13:40:16.675700] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fa40, cid 4, qid 0 00:29:24.950 [2024-10-14 13:40:16.675711] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fbc0, cid 5, qid 0 00:29:24.950 [2024-10-14 13:40:16.675831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.675844] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.675851] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675858] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fa40) on tqpair=0x18e5210 00:29:24.950 [2024-10-14 
13:40:16.675868] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.675878] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.675884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fbc0) on tqpair=0x18e5210 00:29:24.950 [2024-10-14 13:40:16.675907] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.675916] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e5210) 00:29:24.950 [2024-10-14 13:40:16.675926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.950 [2024-10-14 13:40:16.675947] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fbc0, cid 5, qid 0 00:29:24.950 [2024-10-14 13:40:16.676027] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.676041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.676048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.676054] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fbc0) on tqpair=0x18e5210 00:29:24.950 [2024-10-14 13:40:16.676070] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.676079] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e5210) 00:29:24.950 [2024-10-14 13:40:16.676089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.950 [2024-10-14 13:40:16.676109] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x194fbc0, cid 5, qid 0 00:29:24.950 [2024-10-14 13:40:16.676209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.676223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.676230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.676237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fbc0) on tqpair=0x18e5210 00:29:24.950 [2024-10-14 13:40:16.676253] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.950 [2024-10-14 13:40:16.676261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e5210) 00:29:24.950 [2024-10-14 13:40:16.676272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.950 [2024-10-14 13:40:16.676293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fbc0, cid 5, qid 0 00:29:24.950 [2024-10-14 13:40:16.676375] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.950 [2024-10-14 13:40:16.676387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.950 [2024-10-14 13:40:16.676394] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fbc0) on tqpair=0x18e5210 00:29:24.951 [2024-10-14 13:40:16.676427] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e5210) 00:29:24.951 [2024-10-14 13:40:16.676449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:24.951 [2024-10-14 13:40:16.676462] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e5210) 00:29:24.951 [2024-10-14 13:40:16.676479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.951 [2024-10-14 13:40:16.676490] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676497] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18e5210) 00:29:24.951 [2024-10-14 13:40:16.676506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.951 [2024-10-14 13:40:16.676522] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676531] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18e5210) 00:29:24.951 [2024-10-14 13:40:16.676540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.951 [2024-10-14 13:40:16.676562] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fbc0, cid 5, qid 0 00:29:24.951 [2024-10-14 13:40:16.676573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fa40, cid 4, qid 0 00:29:24.951 [2024-10-14 13:40:16.676581] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fd40, cid 6, qid 0 00:29:24.951 [2024-10-14 13:40:16.676588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fec0, cid 7, qid 0 00:29:24.951 [2024-10-14 13:40:16.676783] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:29:24.951 [2024-10-14 13:40:16.676797] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.951 [2024-10-14 13:40:16.676804] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676810] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=8192, cccid=5 00:29:24.951 [2024-10-14 13:40:16.676819] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194fbc0) on tqpair(0x18e5210): expected_datao=0, payload_size=8192 00:29:24.951 [2024-10-14 13:40:16.676826] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676847] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676856] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.951 [2024-10-14 13:40:16.676874] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.951 [2024-10-14 13:40:16.676880] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676886] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=512, cccid=4 00:29:24.951 [2024-10-14 13:40:16.676894] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194fa40) on tqpair(0x18e5210): expected_datao=0, payload_size=512 00:29:24.951 [2024-10-14 13:40:16.676901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676910] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676916] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.951 [2024-10-14 13:40:16.676933] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.951 [2024-10-14 13:40:16.676943] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676949] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=512, cccid=6 00:29:24.951 [2024-10-14 13:40:16.676957] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194fd40) on tqpair(0x18e5210): expected_datao=0, payload_size=512 00:29:24.951 [2024-10-14 13:40:16.676964] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676973] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676981] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.676989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.951 [2024-10-14 13:40:16.676997] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.951 [2024-10-14 13:40:16.677004] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.677010] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e5210): datao=0, datal=4096, cccid=7 00:29:24.951 [2024-10-14 13:40:16.677017] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x194fec0) on tqpair(0x18e5210): expected_datao=0, payload_size=4096 00:29:24.951 [2024-10-14 13:40:16.677024] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.677033] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.677040] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.717209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.951 [2024-10-14 13:40:16.717228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:24.951 [2024-10-14 13:40:16.717236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.717243] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fbc0) on tqpair=0x18e5210 00:29:24.951 [2024-10-14 13:40:16.717262] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.951 [2024-10-14 13:40:16.717273] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.951 [2024-10-14 13:40:16.717280] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.717287] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fa40) on tqpair=0x18e5210 00:29:24.951 [2024-10-14 13:40:16.717304] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.951 [2024-10-14 13:40:16.717315] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.951 [2024-10-14 13:40:16.717321] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.717328] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fd40) on tqpair=0x18e5210 00:29:24.951 [2024-10-14 13:40:16.717338] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.951 [2024-10-14 13:40:16.717348] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.951 [2024-10-14 13:40:16.717354] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.951 [2024-10-14 13:40:16.717360] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fec0) on tqpair=0x18e5210 00:29:24.951 ===================================================== 00:29:24.951 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.951 ===================================================== 00:29:24.951 Controller Capabilities/Features 00:29:24.951 ================================ 00:29:24.951 Vendor ID: 8086 00:29:24.951 Subsystem Vendor 
ID: 8086 00:29:24.951 Serial Number: SPDK00000000000001 00:29:24.951 Model Number: SPDK bdev Controller 00:29:24.951 Firmware Version: 25.01 00:29:24.951 Recommended Arb Burst: 6 00:29:24.951 IEEE OUI Identifier: e4 d2 5c 00:29:24.951 Multi-path I/O 00:29:24.951 May have multiple subsystem ports: Yes 00:29:24.951 May have multiple controllers: Yes 00:29:24.951 Associated with SR-IOV VF: No 00:29:24.951 Max Data Transfer Size: 131072 00:29:24.951 Max Number of Namespaces: 32 00:29:24.951 Max Number of I/O Queues: 127 00:29:24.951 NVMe Specification Version (VS): 1.3 00:29:24.951 NVMe Specification Version (Identify): 1.3 00:29:24.951 Maximum Queue Entries: 128 00:29:24.951 Contiguous Queues Required: Yes 00:29:24.951 Arbitration Mechanisms Supported 00:29:24.951 Weighted Round Robin: Not Supported 00:29:24.951 Vendor Specific: Not Supported 00:29:24.951 Reset Timeout: 15000 ms 00:29:24.951 Doorbell Stride: 4 bytes 00:29:24.951 NVM Subsystem Reset: Not Supported 00:29:24.951 Command Sets Supported 00:29:24.951 NVM Command Set: Supported 00:29:24.951 Boot Partition: Not Supported 00:29:24.951 Memory Page Size Minimum: 4096 bytes 00:29:24.951 Memory Page Size Maximum: 4096 bytes 00:29:24.951 Persistent Memory Region: Not Supported 00:29:24.951 Optional Asynchronous Events Supported 00:29:24.951 Namespace Attribute Notices: Supported 00:29:24.951 Firmware Activation Notices: Not Supported 00:29:24.951 ANA Change Notices: Not Supported 00:29:24.951 PLE Aggregate Log Change Notices: Not Supported 00:29:24.951 LBA Status Info Alert Notices: Not Supported 00:29:24.951 EGE Aggregate Log Change Notices: Not Supported 00:29:24.951 Normal NVM Subsystem Shutdown event: Not Supported 00:29:24.951 Zone Descriptor Change Notices: Not Supported 00:29:24.951 Discovery Log Change Notices: Not Supported 00:29:24.951 Controller Attributes 00:29:24.951 128-bit Host Identifier: Supported 00:29:24.951 Non-Operational Permissive Mode: Not Supported 00:29:24.951 NVM Sets: Not Supported 
00:29:24.951 Read Recovery Levels: Not Supported 00:29:24.951 Endurance Groups: Not Supported 00:29:24.951 Predictable Latency Mode: Not Supported 00:29:24.951 Traffic Based Keep ALive: Not Supported 00:29:24.951 Namespace Granularity: Not Supported 00:29:24.951 SQ Associations: Not Supported 00:29:24.951 UUID List: Not Supported 00:29:24.951 Multi-Domain Subsystem: Not Supported 00:29:24.951 Fixed Capacity Management: Not Supported 00:29:24.951 Variable Capacity Management: Not Supported 00:29:24.951 Delete Endurance Group: Not Supported 00:29:24.951 Delete NVM Set: Not Supported 00:29:24.951 Extended LBA Formats Supported: Not Supported 00:29:24.951 Flexible Data Placement Supported: Not Supported 00:29:24.951 00:29:24.951 Controller Memory Buffer Support 00:29:24.951 ================================ 00:29:24.951 Supported: No 00:29:24.951 00:29:24.951 Persistent Memory Region Support 00:29:24.951 ================================ 00:29:24.951 Supported: No 00:29:24.951 00:29:24.951 Admin Command Set Attributes 00:29:24.951 ============================ 00:29:24.951 Security Send/Receive: Not Supported 00:29:24.951 Format NVM: Not Supported 00:29:24.951 Firmware Activate/Download: Not Supported 00:29:24.951 Namespace Management: Not Supported 00:29:24.952 Device Self-Test: Not Supported 00:29:24.952 Directives: Not Supported 00:29:24.952 NVMe-MI: Not Supported 00:29:24.952 Virtualization Management: Not Supported 00:29:24.952 Doorbell Buffer Config: Not Supported 00:29:24.952 Get LBA Status Capability: Not Supported 00:29:24.952 Command & Feature Lockdown Capability: Not Supported 00:29:24.952 Abort Command Limit: 4 00:29:24.952 Async Event Request Limit: 4 00:29:24.952 Number of Firmware Slots: N/A 00:29:24.952 Firmware Slot 1 Read-Only: N/A 00:29:24.952 Firmware Activation Without Reset: N/A 00:29:24.952 Multiple Update Detection Support: N/A 00:29:24.952 Firmware Update Granularity: No Information Provided 00:29:24.952 Per-Namespace SMART Log: No 00:29:24.952 
Asymmetric Namespace Access Log Page: Not Supported 00:29:24.952 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:24.952 Command Effects Log Page: Supported 00:29:24.952 Get Log Page Extended Data: Supported 00:29:24.952 Telemetry Log Pages: Not Supported 00:29:24.952 Persistent Event Log Pages: Not Supported 00:29:24.952 Supported Log Pages Log Page: May Support 00:29:24.952 Commands Supported & Effects Log Page: Not Supported 00:29:24.952 Feature Identifiers & Effects Log Page:May Support 00:29:24.952 NVMe-MI Commands & Effects Log Page: May Support 00:29:24.952 Data Area 4 for Telemetry Log: Not Supported 00:29:24.952 Error Log Page Entries Supported: 128 00:29:24.952 Keep Alive: Supported 00:29:24.952 Keep Alive Granularity: 10000 ms 00:29:24.952 00:29:24.952 NVM Command Set Attributes 00:29:24.952 ========================== 00:29:24.952 Submission Queue Entry Size 00:29:24.952 Max: 64 00:29:24.952 Min: 64 00:29:24.952 Completion Queue Entry Size 00:29:24.952 Max: 16 00:29:24.952 Min: 16 00:29:24.952 Number of Namespaces: 32 00:29:24.952 Compare Command: Supported 00:29:24.952 Write Uncorrectable Command: Not Supported 00:29:24.952 Dataset Management Command: Supported 00:29:24.952 Write Zeroes Command: Supported 00:29:24.952 Set Features Save Field: Not Supported 00:29:24.952 Reservations: Supported 00:29:24.952 Timestamp: Not Supported 00:29:24.952 Copy: Supported 00:29:24.952 Volatile Write Cache: Present 00:29:24.952 Atomic Write Unit (Normal): 1 00:29:24.952 Atomic Write Unit (PFail): 1 00:29:24.952 Atomic Compare & Write Unit: 1 00:29:24.952 Fused Compare & Write: Supported 00:29:24.952 Scatter-Gather List 00:29:24.952 SGL Command Set: Supported 00:29:24.952 SGL Keyed: Supported 00:29:24.952 SGL Bit Bucket Descriptor: Not Supported 00:29:24.952 SGL Metadata Pointer: Not Supported 00:29:24.952 Oversized SGL: Not Supported 00:29:24.952 SGL Metadata Address: Not Supported 00:29:24.952 SGL Offset: Supported 00:29:24.952 Transport SGL Data Block: Not Supported 
00:29:24.952 Replay Protected Memory Block: Not Supported 00:29:24.952 00:29:24.952 Firmware Slot Information 00:29:24.952 ========================= 00:29:24.952 Active slot: 1 00:29:24.952 Slot 1 Firmware Revision: 25.01 00:29:24.952 00:29:24.952 00:29:24.952 Commands Supported and Effects 00:29:24.952 ============================== 00:29:24.952 Admin Commands 00:29:24.952 -------------- 00:29:24.952 Get Log Page (02h): Supported 00:29:24.952 Identify (06h): Supported 00:29:24.952 Abort (08h): Supported 00:29:24.952 Set Features (09h): Supported 00:29:24.952 Get Features (0Ah): Supported 00:29:24.952 Asynchronous Event Request (0Ch): Supported 00:29:24.952 Keep Alive (18h): Supported 00:29:24.952 I/O Commands 00:29:24.952 ------------ 00:29:24.952 Flush (00h): Supported LBA-Change 00:29:24.952 Write (01h): Supported LBA-Change 00:29:24.952 Read (02h): Supported 00:29:24.952 Compare (05h): Supported 00:29:24.952 Write Zeroes (08h): Supported LBA-Change 00:29:24.952 Dataset Management (09h): Supported LBA-Change 00:29:24.952 Copy (19h): Supported LBA-Change 00:29:24.952 00:29:24.952 Error Log 00:29:24.952 ========= 00:29:24.952 00:29:24.952 Arbitration 00:29:24.952 =========== 00:29:24.952 Arbitration Burst: 1 00:29:24.952 00:29:24.952 Power Management 00:29:24.952 ================ 00:29:24.952 Number of Power States: 1 00:29:24.952 Current Power State: Power State #0 00:29:24.952 Power State #0: 00:29:24.952 Max Power: 0.00 W 00:29:24.952 Non-Operational State: Operational 00:29:24.952 Entry Latency: Not Reported 00:29:24.952 Exit Latency: Not Reported 00:29:24.952 Relative Read Throughput: 0 00:29:24.952 Relative Read Latency: 0 00:29:24.952 Relative Write Throughput: 0 00:29:24.952 Relative Write Latency: 0 00:29:24.952 Idle Power: Not Reported 00:29:24.952 Active Power: Not Reported 00:29:24.952 Non-Operational Permissive Mode: Not Supported 00:29:24.952 00:29:24.952 Health Information 00:29:24.952 ================== 00:29:24.952 Critical Warnings: 00:29:24.952 
Available Spare Space: OK 00:29:24.952 Temperature: OK 00:29:24.952 Device Reliability: OK 00:29:24.952 Read Only: No 00:29:24.952 Volatile Memory Backup: OK 00:29:24.952 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:24.952 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:24.952 Available Spare: 0% 00:29:24.952 Available Spare Threshold: 0% 00:29:24.952 Life Percentage Used:[2024-10-14 13:40:16.717475] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.952 [2024-10-14 13:40:16.717487] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18e5210) 00:29:24.952 [2024-10-14 13:40:16.717499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.952 [2024-10-14 13:40:16.717523] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194fec0, cid 7, qid 0 00:29:24.952 [2024-10-14 13:40:16.717623] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.952 [2024-10-14 13:40:16.717637] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.952 [2024-10-14 13:40:16.717644] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.952 [2024-10-14 13:40:16.717651] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194fec0) on tqpair=0x18e5210 00:29:24.952 [2024-10-14 13:40:16.717703] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:24.952 [2024-10-14 13:40:16.717723] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f440) on tqpair=0x18e5210 00:29:24.952 [2024-10-14 13:40:16.717733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.952 [2024-10-14 13:40:16.717742] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f5c0) on tqpair=0x18e5210 
00:29:24.952 [2024-10-14 13:40:16.717750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.952 [2024-10-14 13:40:16.717758] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f740) on tqpair=0x18e5210 00:29:24.952 [2024-10-14 13:40:16.717765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.952 [2024-10-14 13:40:16.717773] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.952 [2024-10-14 13:40:16.717781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.952 [2024-10-14 13:40:16.717793] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.952 [2024-10-14 13:40:16.717801] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.717808] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.717818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.717842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.717952] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.717964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.717971] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.717978] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.717989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.717997] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.718013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.718039] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.718142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.718155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.718162] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718169] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.718176] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:24.953 [2024-10-14 13:40:16.718184] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:24.953 [2024-10-14 13:40:16.718199] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718208] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718214] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.718225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.718246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 
[2024-10-14 13:40:16.718344] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.718358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.718365] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.718388] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718397] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718404] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.718414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.718435] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.718511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.718523] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.718530] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718536] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.718552] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.718578] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.718598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.718684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.718698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.718705] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718711] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.718727] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718736] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718743] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.718753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.718774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.718843] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.718855] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.718862] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718869] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.718884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718893] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.718900] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.718910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.718930] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.719002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.719018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.719025] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.719032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.719048] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.719057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.719064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.719074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.719094] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.723142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.723159] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.723173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.723180] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.723197] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.723207] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.723213] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e5210) 00:29:24.953 [2024-10-14 13:40:16.723224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.953 [2024-10-14 13:40:16.723247] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x194f8c0, cid 3, qid 0 00:29:24.953 [2024-10-14 13:40:16.723335] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.953 [2024-10-14 13:40:16.723349] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.953 [2024-10-14 13:40:16.723356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.953 [2024-10-14 13:40:16.723363] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x194f8c0) on tqpair=0x18e5210 00:29:24.953 [2024-10-14 13:40:16.723376] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:24.953 0% 00:29:24.953 Data Units Read: 0 00:29:24.953 Data Units Written: 0 00:29:24.953 Host Read Commands: 0 00:29:24.953 Host Write Commands: 0 00:29:24.953 Controller Busy Time: 0 minutes 00:29:24.953 Power Cycles: 0 00:29:24.953 Power On Hours: 0 hours 00:29:24.953 Unsafe Shutdowns: 0 00:29:24.953 Unrecoverable Media Errors: 0 00:29:24.953 Lifetime Error Log Entries: 0 00:29:24.953 Warning Temperature Time: 0 minutes 00:29:24.953 Critical Temperature Time: 0 minutes 00:29:24.953 00:29:24.953 Number of Queues 00:29:24.953 ================ 00:29:24.953 Number of I/O Submission Queues: 127 00:29:24.953 
Number of I/O Completion Queues: 127 00:29:24.953 00:29:24.953 Active Namespaces 00:29:24.953 ================= 00:29:24.953 Namespace ID:1 00:29:24.953 Error Recovery Timeout: Unlimited 00:29:24.953 Command Set Identifier: NVM (00h) 00:29:24.953 Deallocate: Supported 00:29:24.953 Deallocated/Unwritten Error: Not Supported 00:29:24.953 Deallocated Read Value: Unknown 00:29:24.953 Deallocate in Write Zeroes: Not Supported 00:29:24.953 Deallocated Guard Field: 0xFFFF 00:29:24.953 Flush: Supported 00:29:24.953 Reservation: Supported 00:29:24.953 Namespace Sharing Capabilities: Multiple Controllers 00:29:24.953 Size (in LBAs): 131072 (0GiB) 00:29:24.953 Capacity (in LBAs): 131072 (0GiB) 00:29:24.953 Utilization (in LBAs): 131072 (0GiB) 00:29:24.953 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:24.953 EUI64: ABCDEF0123456789 00:29:24.953 UUID: 8b65f60e-9392-4b7b-adba-c22d88ab44e0 00:29:24.953 Thin Provisioning: Not Supported 00:29:24.953 Per-NS Atomic Units: Yes 00:29:24.953 Atomic Boundary Size (Normal): 0 00:29:24.953 Atomic Boundary Size (PFail): 0 00:29:24.953 Atomic Boundary Offset: 0 00:29:24.953 Maximum Single Source Range Length: 65535 00:29:24.953 Maximum Copy Length: 65535 00:29:24.953 Maximum Source Range Count: 1 00:29:24.953 NGUID/EUI64 Never Reused: No 00:29:24.953 Namespace Write Protected: No 00:29:24.953 Number of LBA Formats: 1 00:29:24.953 Current LBA Format: LBA Format #00 00:29:24.953 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:24.953 00:29:24.953 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.954 rmmod nvme_tcp 00:29:24.954 rmmod nvme_fabrics 00:29:24.954 rmmod nvme_keyring 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:24.954 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 336429 ']' 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 336429 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 336429 ']' 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 336429 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 336429 00:29:25.213 13:40:16 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 336429' 00:29:25.213 killing process with pid 336429 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 336429 00:29:25.213 13:40:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 336429 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.477 13:40:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.384 
00:29:27.384 real 0m5.672s 00:29:27.384 user 0m5.036s 00:29:27.384 sys 0m1.973s 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:27.384 ************************************ 00:29:27.384 END TEST nvmf_identify 00:29:27.384 ************************************ 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.384 ************************************ 00:29:27.384 START TEST nvmf_perf 00:29:27.384 ************************************ 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:27.384 * Looking for test storage... 
00:29:27.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:27.384 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.643 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:27.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.643 --rc genhtml_branch_coverage=1 00:29:27.643 --rc genhtml_function_coverage=1 00:29:27.644 --rc genhtml_legend=1 00:29:27.644 --rc geninfo_all_blocks=1 00:29:27.644 --rc geninfo_unexecuted_blocks=1 00:29:27.644 00:29:27.644 ' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:27.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:27.644 --rc genhtml_branch_coverage=1 00:29:27.644 --rc genhtml_function_coverage=1 00:29:27.644 --rc genhtml_legend=1 00:29:27.644 --rc geninfo_all_blocks=1 00:29:27.644 --rc geninfo_unexecuted_blocks=1 00:29:27.644 00:29:27.644 ' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:27.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.644 --rc genhtml_branch_coverage=1 00:29:27.644 --rc genhtml_function_coverage=1 00:29:27.644 --rc genhtml_legend=1 00:29:27.644 --rc geninfo_all_blocks=1 00:29:27.644 --rc geninfo_unexecuted_blocks=1 00:29:27.644 00:29:27.644 ' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:27.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.644 --rc genhtml_branch_coverage=1 00:29:27.644 --rc genhtml_function_coverage=1 00:29:27.644 --rc genhtml_legend=1 00:29:27.644 --rc geninfo_all_blocks=1 00:29:27.644 --rc geninfo_unexecuted_blocks=1 00:29:27.644 00:29:27.644 ' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:27.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:27.644 13:40:19 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.644 13:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.179 13:40:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.179 
13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:30.179 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:30.179 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:30.179 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:30.179 13:40:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:30.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.179 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT'
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:30.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:30.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms
00:29:30.180
00:29:30.180 --- 10.0.0.2 ping statistics ---
00:29:30.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:30.180 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:30.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:30.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:29:30.180
00:29:30.180 --- 10.0.0.1 ping statistics ---
00:29:30.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:30.180 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter
start_nvmf_tgt 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=338522 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 338522 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 338522 ']' 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:30.180 [2024-10-14 13:40:21.721018] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:29:30.180 [2024-10-14 13:40:21.721099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.180 [2024-10-14 13:40:21.787332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.180 [2024-10-14 13:40:21.833349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.180 [2024-10-14 13:40:21.833424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.180 [2024-10-14 13:40:21.833438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.180 [2024-10-14 13:40:21.833449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.180 [2024-10-14 13:40:21.833457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:30.180 [2024-10-14 13:40:21.835066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.180 [2024-10-14 13:40:21.835173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.180 [2024-10-14 13:40:21.835198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.180 [2024-10-14 13:40:21.835202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:30.180 13:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:33.459 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:33.459 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:33.716 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:33.716 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:33.974 13:40:25 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:33.974 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:33.974 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:33.974 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:33.974 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:34.231 [2024-10-14 13:40:25.909501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.231 13:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.489 13:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:34.489 13:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.746 13:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:34.746 13:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:35.004 13:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.262 [2024-10-14 13:40:26.989447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.262 13:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420
00:29:35.519 13:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']'
00:29:35.519 13:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:29:35.520 13:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:29:35.520 13:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:29:36.918 Initializing NVMe Controllers
00:29:36.918 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54]
00:29:36.918 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0
00:29:36.918 Initialization complete. Launching workers.
00:29:36.918 ========================================================
00:29:36.918 Latency(us)
00:29:36.918 Device Information : IOPS MiB/s Average min max
00:29:36.918 PCIE (0000:88:00.0) NSID 1 from core 0: 86193.45 336.69 370.63 31.83 4508.30
00:29:36.918 ========================================================
00:29:36.918 Total : 86193.45 336.69 370.63 31.83 4508.30
00:29:36.918
00:29:36.918 13:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:38.290 Initializing NVMe Controllers
00:29:38.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:38.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:38.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:38.290 Initialization complete. Launching workers.
00:29:38.290 ========================================================
00:29:38.290 Latency(us)
00:29:38.290 Device Information : IOPS MiB/s Average min max
00:29:38.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 124.00 0.48 8326.83 137.35 44969.53
00:29:38.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 20989.22 7954.88 51890.20
00:29:38.290 ========================================================
00:29:38.290 Total : 172.00 0.67 11860.52 137.35 51890.20
00:29:38.290
00:29:38.290 13:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:39.223 Initializing NVMe Controllers
00:29:39.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:39.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:39.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:39.223 Initialization complete. Launching workers.
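A note on reading these tables: the Total row's Average column is the IOPS-weighted mean of the per-namespace averages, not a simple mean. A minimal sketch recomputing the Total row of the -q 1 run above from its two namespace rows (all numbers copied verbatim from the log; awk is used only for floating-point arithmetic):

```shell
# Recompute the "Total" row of the q=1 spdk_nvme_perf table above.
awk 'BEGIN {
  iops1 = 124.00; avg1 = 8326.83    # NSID 1: IOPS, Average(us)
  iops2 = 48.00;  avg2 = 20989.22   # NSID 2: IOPS, Average(us)
  total_iops = iops1 + iops2
  # Total average latency = IOPS-weighted mean of the per-ns averages
  total_avg = (iops1 * avg1 + iops2 * avg2) / total_iops
  printf "Total: %.2f IOPS, %.2f us avg\n", total_iops, total_avg
}'
# -> Total: 172.00 IOPS, 11860.52 us avg   (matches the Total row above)
```

The same check applies to every other Total row in this log.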
00:29:39.223 ========================================================
00:29:39.223 Latency(us)
00:29:39.223 Device Information : IOPS MiB/s Average min max
00:29:39.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8407.00 32.84 3806.92 619.03 7500.59
00:29:39.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3954.00 15.45 8134.93 6834.61 15728.06
00:29:39.223 ========================================================
00:29:39.223 Total : 12361.00 48.29 5191.35 619.03 15728.06
00:29:39.223
00:29:39.223 13:40:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:29:39.223 13:40:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:29:39.223 13:40:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:41.751 Initializing NVMe Controllers
00:29:41.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:41.751 Controller IO queue size 128, less than required.
00:29:41.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:41.751 Controller IO queue size 128, less than required.
00:29:41.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:41.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:41.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:41.751 Initialization complete. Launching workers.
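The MiB/s column in these tables is derived from the IOPS column and the IO size given by -o. A quick sketch verifying the -q 32 -o 4096 run above (values copied from the log):

```shell
# MiB/s = IOPS * io_size_bytes / 2^20; this run used -o 4096.
awk 'BEGIN {
  io = 4096                                    # IO size from -o 4096
  printf "NSID 1: %.2f MiB/s\n", 8407.00 * io / (1024 * 1024)
  printf "NSID 2: %.2f MiB/s\n", 3954.00 * io / (1024 * 1024)
}'
# -> NSID 1: 32.84 MiB/s
#    NSID 2: 15.45 MiB/s   (both match the table above)
```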
00:29:41.751 ========================================================
00:29:41.751 Latency(us)
00:29:41.751 Device Information : IOPS MiB/s Average min max
00:29:41.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.95 429.74 75376.32 48677.55 121583.25
00:29:41.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.48 146.62 229584.03 90665.27 338465.05
00:29:41.751 ========================================================
00:29:41.751 Total : 2305.43 576.36 114605.48 48677.55 338465.05
00:29:41.751
00:29:41.751 13:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:29:42.317 No valid NVMe controllers or AIO or URING devices found
00:29:42.317 Initializing NVMe Controllers
00:29:42.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:42.317 Controller IO queue size 128, less than required.
00:29:42.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:42.317 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:29:42.317 Controller IO queue size 128, less than required.
00:29:42.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:42.317 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:29:42.317 WARNING: Some requested NVMe devices were skipped
00:29:42.317 13:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:29:44.845 Initializing NVMe Controllers
00:29:44.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:44.845 Controller IO queue size 128, less than required.
00:29:44.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:44.845 Controller IO queue size 128, less than required.
00:29:44.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:44.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:44.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:44.846 Initialization complete. Launching workers.
00:29:44.846
00:29:44.846 ====================
00:29:44.846 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:29:44.846 TCP transport:
00:29:44.846 polls: 9251
00:29:44.846 idle_polls: 6064
00:29:44.846 sock_completions: 3187
00:29:44.846 nvme_completions: 5977
00:29:44.846 submitted_requests: 8948
00:29:44.846 queued_requests: 1
00:29:44.846
00:29:44.846 ====================
00:29:44.846 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:29:44.846 TCP transport:
00:29:44.846 polls: 12344
00:29:44.846 idle_polls: 8624
00:29:44.846 sock_completions: 3720
00:29:44.846 nvme_completions: 6509
00:29:44.846 submitted_requests: 9688
00:29:44.846 queued_requests: 1
00:29:44.846 ========================================================
Latency(us)
Device Information : IOPS MiB/s Average min max
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1491.59 372.90 87973.13 66475.93 154049.33
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1624.37 406.09 78883.35 40741.25 110853.40
========================================================
Total : 3115.96 778.99 83234.56 40741.25 154049.33
00:29:44.846
00:29:44.846 13:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:29:45.104 13:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:45.104 13:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:29:45.104 13:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:29:45.104 13:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:29:48.382 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf --
host/perf.sh@72 -- # ls_guid=39409808-a59d-4a6c-9578-6a9db83f3262 00:29:48.382 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 39409808-a59d-4a6c-9578-6a9db83f3262 00:29:48.382 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=39409808-a59d-4a6c-9578-6a9db83f3262 00:29:48.382 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:48.382 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:48.382 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:48.382 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:48.640 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:48.640 { 00:29:48.640 "uuid": "39409808-a59d-4a6c-9578-6a9db83f3262", 00:29:48.640 "name": "lvs_0", 00:29:48.640 "base_bdev": "Nvme0n1", 00:29:48.640 "total_data_clusters": 238234, 00:29:48.640 "free_clusters": 238234, 00:29:48.640 "block_size": 512, 00:29:48.640 "cluster_size": 4194304 00:29:48.640 } 00:29:48.640 ]' 00:29:48.640 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="39409808-a59d-4a6c-9578-6a9db83f3262") .free_clusters' 00:29:48.640 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:48.640 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="39409808-a59d-4a6c-9578-6a9db83f3262") .cluster_size' 00:29:48.897 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:48.897 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:48.897 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 
00:29:48.897 952936 00:29:48.897 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:48.897 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:48.897 13:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 39409808-a59d-4a6c-9578-6a9db83f3262 lbd_0 20480 00:29:49.462 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=68c2d243-0763-4bc8-a992-b149b3710d15 00:29:49.462 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 68c2d243-0763-4bc8-a992-b149b3710d15 lvs_n_0 00:29:50.395 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=7fad9a01-4186-4a39-b517-3170ff34f0ca 00:29:50.395 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 7fad9a01-4186-4a39-b517-3170ff34f0ca 00:29:50.395 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=7fad9a01-4186-4a39-b517-3170ff34f0ca 00:29:50.395 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:50.395 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:50.395 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:50.395 13:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:50.395 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:50.395 { 00:29:50.395 "uuid": "39409808-a59d-4a6c-9578-6a9db83f3262", 00:29:50.395 "name": "lvs_0", 00:29:50.395 "base_bdev": "Nvme0n1", 00:29:50.395 "total_data_clusters": 238234, 00:29:50.395 "free_clusters": 233114, 00:29:50.395 "block_size": 512, 00:29:50.395 
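The `get_lvs_free_mb` trace above can be reproduced directly: it multiplies `free_clusters` by `cluster_size` from the `bdev_lvol_get_lvstores` output, converts to MiB, then `perf.sh@77-78` caps the result at 20480. A minimal sketch using the values from this run:

```shell
# Free-space math from the trace above (values taken from the log).
fc=238234                            # free_clusters reported by bdev_lvol_get_lvstores
cs=4194304                           # cluster_size in bytes (4 MiB)
free_mb=$(( fc * cs / 1048576 ))     # bytes -> MiB
echo "$free_mb"                      # 952936

# perf.sh then caps the lvol size at 20480 MiB:
[ "$free_mb" -gt 20480 ] && free_mb=20480
echo "$free_mb"                      # 20480
```

The same arithmetic explains the nested lvstore below: 5114 clusters * 4 MiB = 20456 MiB, which is under the 20480 cap, so it is used as-is.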
"cluster_size": 4194304 00:29:50.395 }, 00:29:50.395 { 00:29:50.395 "uuid": "7fad9a01-4186-4a39-b517-3170ff34f0ca", 00:29:50.395 "name": "lvs_n_0", 00:29:50.395 "base_bdev": "68c2d243-0763-4bc8-a992-b149b3710d15", 00:29:50.395 "total_data_clusters": 5114, 00:29:50.395 "free_clusters": 5114, 00:29:50.395 "block_size": 512, 00:29:50.395 "cluster_size": 4194304 00:29:50.395 } 00:29:50.395 ]' 00:29:50.395 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7fad9a01-4186-4a39-b517-3170ff34f0ca") .free_clusters' 00:29:50.653 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:50.653 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="7fad9a01-4186-4a39-b517-3170ff34f0ca") .cluster_size' 00:29:50.653 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:50.653 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:50.653 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:50.653 20456 00:29:50.653 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:50.653 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7fad9a01-4186-4a39-b517-3170ff34f0ca lbd_nest_0 20456 00:29:50.910 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=8243fde4-7a3b-4726-aa9a-6a9fe88bb9c2 00:29:50.910 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.167 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:51.167 13:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8243fde4-7a3b-4726-aa9a-6a9fe88bb9c2 00:29:51.424 13:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.682 13:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:51.682 13:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:51.682 13:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:51.682 13:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:51.682 13:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:03.877 Initializing NVMe Controllers 00:30:03.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:03.877 Initialization complete. Launching workers. 
00:30:03.877 ======================================================== 00:30:03.877 Latency(us) 00:30:03.877 Device Information : IOPS MiB/s Average min max 00:30:03.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.70 0.02 21937.22 165.95 48730.08 00:30:03.877 ======================================================== 00:30:03.877 Total : 45.70 0.02 21937.22 165.95 48730.08 00:30:03.877 00:30:03.877 13:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:03.877 13:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:13.839 Initializing NVMe Controllers 00:30:13.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:13.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:13.839 Initialization complete. Launching workers. 
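The `qd_depth` and `io_size` arrays set at `perf.sh@95-96` drive a nested loop, so the six `spdk_nvme_perf` runs that follow cover every (queue depth, I/O size) pair. A dry-run sketch of that sweep (`$PERF` is a placeholder for the `spdk_nvme_perf` binary path):

```shell
# Dry-run sketch of the perf.sh@97-99 sweep: each queue depth is paired with
# each I/O size, yielding the six spdk_nvme_perf invocations in this log.
PERF=${PERF:-spdk_nvme_perf}
qd_depth=("1" "32" "128")
io_size=("512" "131072")
for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        echo "$PERF -q $qd -o $o -w randrw -M 50 -t 10" \
             "-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'"
    done
done
```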
00:30:13.839 ======================================================== 00:30:13.839 Latency(us) 00:30:13.839 Device Information : IOPS MiB/s Average min max 00:30:13.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.20 9.78 12792.94 4028.71 50911.29 00:30:13.839 ======================================================== 00:30:13.839 Total : 78.20 9.78 12792.94 4028.71 50911.29 00:30:13.839 00:30:13.839 13:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:13.839 13:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:13.839 13:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.803 Initializing NVMe Controllers 00:30:23.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.803 Initialization complete. Launching workers. 
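The MiB/s column in these tables is just IOPS times the I/O size, converted to MiB. A quick cross-check against two of the rows above:

```shell
# Sanity check of the MiB/s column: MiB/s = IOPS * io_size_bytes / 2^20.
awk 'BEGIN {
    printf "%.2f\n", 78.20 * 131072 / 1048576    # q=1,  o=131072 -> 9.78 MiB/s
    printf "%.2f\n", 7043.87 * 512 / 1048576     # q=32, o=512    -> 3.44 MiB/s
}'
```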
00:30:23.803 ======================================================== 00:30:23.803 Latency(us) 00:30:23.803 Device Information : IOPS MiB/s Average min max 00:30:23.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7043.87 3.44 4545.30 303.41 45401.68 00:30:23.803 ======================================================== 00:30:23.803 Total : 7043.87 3.44 4545.30 303.41 45401.68 00:30:23.803 00:30:23.803 13:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:23.803 13:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.769 Initializing NVMe Controllers 00:30:33.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:33.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:33.769 Initialization complete. Launching workers. 
00:30:33.769 ======================================================== 00:30:33.769 Latency(us) 00:30:33.769 Device Information : IOPS MiB/s Average min max 00:30:33.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3786.17 473.27 8452.95 1390.48 18704.19 00:30:33.769 ======================================================== 00:30:33.769 Total : 3786.17 473.27 8452.95 1390.48 18704.19 00:30:33.769 00:30:33.769 13:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:33.769 13:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:33.769 13:41:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.729 Initializing NVMe Controllers 00:30:43.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.729 Controller IO queue size 128, less than required. 00:30:43.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:43.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:43.729 Initialization complete. Launching workers. 
00:30:43.729 ======================================================== 00:30:43.729 Latency(us) 00:30:43.729 Device Information : IOPS MiB/s Average min max 00:30:43.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11679.46 5.70 10964.81 1818.47 26824.77 00:30:43.729 ======================================================== 00:30:43.729 Total : 11679.46 5.70 10964.81 1818.47 26824.77 00:30:43.729 00:30:43.729 13:41:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:43.729 13:41:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:53.692 Initializing NVMe Controllers 00:30:53.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:53.692 Controller IO queue size 128, less than required. 00:30:53.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:53.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:53.692 Initialization complete. Launching workers. 
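The "Controller IO queue size 128, less than required" message appears only for the `-q 128` runs: an NVMe queue of size N can hold at most N-1 outstanding commands, so a requested depth equal to the queue size cannot be kept fully in flight and the excess waits in the driver. An illustrative sketch of that bookkeeping (the N-1 capacity is standard NVMe queue semantics; the exact driver behavior is simplified here):

```shell
# Illustrative only: why qd=128 against a 128-entry controller queue queues work.
requested_qd=128
ctrlr_queue_size=128
max_outstanding=$(( ctrlr_queue_size - 1 ))   # a queue of size N holds N-1 commands
in_flight=$(( requested_qd < max_outstanding ? requested_qd : max_outstanding ))
queued=$(( requested_qd - in_flight ))
echo "$in_flight in flight, $queued queued in the driver"
```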
00:30:53.692 ======================================================== 00:30:53.692 Latency(us) 00:30:53.692 Device Information : IOPS MiB/s Average min max 00:30:53.692 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1175.77 146.97 109740.24 24257.29 229290.11 00:30:53.692 ======================================================== 00:30:53.692 Total : 1175.77 146.97 109740.24 24257.29 229290.11 00:30:53.692 00:30:53.692 13:41:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.950 13:41:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8243fde4-7a3b-4726-aa9a-6a9fe88bb9c2 00:30:54.884 13:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:55.141 13:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 68c2d243-0763-4bc8-a992-b149b3710d15 00:30:55.399 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
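The cleanup at `perf.sh@104-108` tears resources down in reverse creation order: subsystem first, then the nested lvol and lvstore, then the base lvol and lvstore. A dry-run sketch of that sequence (`rpc_py` is a placeholder for the SPDK `scripts/rpc.py` path; the GUIDs are the ones from this run):

```shell
# Teardown mirrors creation in reverse (dry run: commands are printed, not executed).
rpc_py=${rpc_py:-scripts/rpc.py}              # placeholder for spdk/scripts/rpc.py
teardown=(
    "nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1"
    "bdev_lvol_delete 8243fde4-7a3b-4726-aa9a-6a9fe88bb9c2"   # nested lvol (lbd_nest_0)
    "bdev_lvol_delete_lvstore -l lvs_n_0"                     # nested lvstore
    "bdev_lvol_delete 68c2d243-0763-4bc8-a992-b149b3710d15"   # base lvol (lbd_0)
    "bdev_lvol_delete_lvstore -l lvs_0"                       # base lvstore
)
for cmd in "${teardown[@]}"; do
    echo "$rpc_py $cmd"
done
```

Deleting in this order matters: the nested lvstore lives on top of the base lvol, so the base lvol cannot be removed while `lvs_n_0` still exists.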
in {1..20} 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.657 rmmod nvme_tcp 00:30:55.657 rmmod nvme_fabrics 00:30:55.657 rmmod nvme_keyring 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 338522 ']' 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 338522 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 338522 ']' 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 338522 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:55.657 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 338522 00:30:55.915 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:55.915 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:55.915 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 338522' 00:30:55.915 killing process with pid 338522 00:30:55.915 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 338522 00:30:55.915 13:41:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 338522 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == 
\t\c\p ]] 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.296 13:41:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.832 00:30:59.832 real 1m31.950s 00:30:59.832 user 5m37.210s 00:30:59.832 sys 0m16.815s 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:59.832 ************************************ 00:30:59.832 END TEST nvmf_perf 00:30:59.832 ************************************ 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:59.832 ************************************ 00:30:59.832 START TEST nvmf_fio_host 00:30:59.832 ************************************ 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:59.832 * Looking for test storage... 00:30:59.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- 
# export 'LCOV_OPTS= 00:30:59.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.832 --rc genhtml_branch_coverage=1 00:30:59.832 --rc genhtml_function_coverage=1 00:30:59.832 --rc genhtml_legend=1 00:30:59.832 --rc geninfo_all_blocks=1 00:30:59.832 --rc geninfo_unexecuted_blocks=1 00:30:59.832 00:30:59.832 ' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:59.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.832 --rc genhtml_branch_coverage=1 00:30:59.832 --rc genhtml_function_coverage=1 00:30:59.832 --rc genhtml_legend=1 00:30:59.832 --rc geninfo_all_blocks=1 00:30:59.832 --rc geninfo_unexecuted_blocks=1 00:30:59.832 00:30:59.832 ' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:59.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.832 --rc genhtml_branch_coverage=1 00:30:59.832 --rc genhtml_function_coverage=1 00:30:59.832 --rc genhtml_legend=1 00:30:59.832 --rc geninfo_all_blocks=1 00:30:59.832 --rc geninfo_unexecuted_blocks=1 00:30:59.832 00:30:59.832 ' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:59.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.832 --rc genhtml_branch_coverage=1 00:30:59.832 --rc genhtml_function_coverage=1 00:30:59.832 --rc genhtml_legend=1 00:30:59.832 --rc geninfo_all_blocks=1 00:30:59.832 --rc geninfo_unexecuted_blocks=1 00:30:59.832 00:30:59.832 ' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.832 13:41:51 
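The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.` and compares them component by component, padding the shorter one with zeros. A minimal sketch of that comparison (simplified from `scripts/common.sh`, which also handles `-` separators):

```shell
# Sketch of the version comparison traced above: returns 0 when $1 < $2.
lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}    # missing components compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                               # equal is not less-than
}
lt 1.15 2 && echo "1.15 < 2"               # matches the trace: lcov 1.15 is older
```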
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.832 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
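The long repeated `/opt/golangci/.../opt/protoc/.../opt/go/...` runs in the `PATH` values above come from `paths/export.sh` being sourced once per sourced script, each time prepending the same three directories. Harmless, but noisy; for reference, a `PATH` like that can be de-duplicated while preserving first-seen order:

```shell
# De-duplicate a colon-separated PATH, keeping the first occurrence of each dir.
dedup_path() {
    local out= seen= dir
    local IFS=:
    for dir in $1; do
        case ":$seen:" in
            *":$dir:"*) ;;                         # already kept, skip
            *) seen=$seen:$dir; out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}
dedup_path "/a/bin:/b/bin:/a/bin:/c/bin"           # -> /a/bin:/b/bin:/c/bin
```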
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:59.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:59.833 13:41:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.833 13:41:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.740 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:31:01.741 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:01.741 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.741 13:41:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:01.741 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:01.741 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.741 13:41:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.741 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:31:02.000 00:31:02.000 --- 10.0.0.2 ping statistics --- 00:31:02.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.000 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:31:02.000 00:31:02.000 --- 10.0.0.1 ping statistics --- 00:31:02.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.000 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=350498 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 350498 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 350498 ']' 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:02.000 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.000 [2024-10-14 13:41:53.734727] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:31:02.000 [2024-10-14 13:41:53.734813] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.000 [2024-10-14 13:41:53.799931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:02.000 [2024-10-14 13:41:53.849497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.000 [2024-10-14 13:41:53.849549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:02.000 [2024-10-14 13:41:53.849580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.000 [2024-10-14 13:41:53.849592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.000 [2024-10-14 13:41:53.849602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.000 [2024-10-14 13:41:53.851207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.000 [2024-10-14 13:41:53.851275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.000 [2024-10-14 13:41:53.851341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.000 [2024-10-14 13:41:53.851345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.259 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:02.259 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:02.259 13:41:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:02.517 [2024-10-14 13:41:54.210126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.517 13:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:02.517 13:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:02.517 13:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.517 13:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:02.775 Malloc1 00:31:02.775 13:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:03.033 13:41:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:03.291 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.549 [2024-10-14 13:41:55.371287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.549 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:03.808 13:41:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:03.808 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:04.066 13:41:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:04.066 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:04.066 fio-3.35 00:31:04.066 Starting 1 thread 00:31:06.594 00:31:06.594 test: (groupid=0, jobs=1): err= 0: pid=350869: Mon Oct 14 13:41:58 2024 00:31:06.594 read: IOPS=8269, BW=32.3MiB/s (33.9MB/s)(64.8MiB/2007msec) 00:31:06.594 slat (nsec): min=1810, max=124474, avg=2358.74, stdev=1545.49 00:31:06.594 clat (usec): min=2568, max=14829, avg=8452.61, stdev=687.19 00:31:06.594 lat (usec): min=2592, max=14831, avg=8454.97, stdev=687.10 00:31:06.594 clat percentiles (usec): 00:31:06.594 | 1.00th=[ 6915], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 7898], 00:31:06.594 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:31:06.594 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:31:06.594 | 99.00th=[ 9896], 99.50th=[10028], 99.90th=[11469], 99.95th=[13698], 00:31:06.594 | 99.99th=[14746] 00:31:06.594 bw ( KiB/s): min=32152, max=33568, per=99.89%, avg=33042.00, stdev=618.55, samples=4 00:31:06.594 iops : min= 8038, max= 8392, avg=8260.50, stdev=154.64, samples=4 00:31:06.594 write: IOPS=8269, BW=32.3MiB/s (33.9MB/s)(64.8MiB/2007msec); 0 zone resets 00:31:06.594 slat (nsec): min=1968, max=115010, avg=2464.64, stdev=1285.39 00:31:06.594 clat (usec): min=1094, max=13779, avg=6966.98, stdev=568.43 00:31:06.594 lat (usec): min=1100, max=13782, avg=6969.45, stdev=568.38 00:31:06.594 clat percentiles (usec): 00:31:06.594 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:31:06.594 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 
00:31:06.595 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7767], 00:31:06.595 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[10945], 99.95th=[11469], 00:31:06.595 | 99.99th=[13698] 00:31:06.595 bw ( KiB/s): min=32720, max=33456, per=100.00%, avg=33090.00, stdev=317.08, samples=4 00:31:06.595 iops : min= 8180, max= 8364, avg=8272.50, stdev=79.27, samples=4 00:31:06.595 lat (msec) : 2=0.03%, 4=0.10%, 10=99.50%, 20=0.38% 00:31:06.595 cpu : usr=65.85%, sys=32.65%, ctx=110, majf=0, minf=36 00:31:06.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:06.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:06.595 issued rwts: total=16597,16597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:06.595 00:31:06.595 Run status group 0 (all jobs): 00:31:06.595 READ: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=64.8MiB (68.0MB), run=2007-2007msec 00:31:06.595 WRITE: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=64.8MiB (68.0MB), run=2007-2007msec 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n 
'' ]] 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:06.595 13:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:06.595 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:06.595 fio-3.35 00:31:06.595 Starting 1 thread 00:31:08.495 [2024-10-14 13:42:00.052764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161a9d0 is same with the state(6) to be set 00:31:09.062 00:31:09.062 test: (groupid=0, jobs=1): err= 0: pid=351304: Mon Oct 14 13:42:00 2024 00:31:09.062 read: IOPS=8247, BW=129MiB/s (135MB/s)(259MiB/2006msec) 00:31:09.062 slat (nsec): min=2847, max=93912, avg=3780.79, stdev=1779.49 00:31:09.062 clat (usec): min=2846, max=17428, avg=8912.82, stdev=2085.06 00:31:09.062 lat (usec): min=2850, max=17430, avg=8916.60, stdev=2085.10 00:31:09.062 clat percentiles (usec): 00:31:09.062 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7111], 00:31:09.062 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:31:09.062 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11469], 95.00th=[12518], 00:31:09.062 | 99.00th=[14615], 99.50th=[15533], 99.90th=[16319], 99.95th=[16450], 00:31:09.062 | 99.99th=[17433] 00:31:09.062 bw ( KiB/s): min=61760, max=76608, per=51.50%, avg=67960.00, stdev=7320.77, samples=4 00:31:09.062 iops : min= 3860, max= 4788, avg=4247.50, stdev=457.55, samples=4 00:31:09.062 write: IOPS=4900, BW=76.6MiB/s (80.3MB/s)(139MiB/1817msec); 0 zone resets 00:31:09.062 slat (usec): min=30, max=146, avg=34.01, stdev= 5.25 00:31:09.062 clat (usec): min=6199, max=19946, avg=11524.20, stdev=2018.40 
00:31:09.062 lat (usec): min=6231, max=19997, avg=11558.21, stdev=2018.44 00:31:09.062 clat percentiles (usec): 00:31:09.062 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9765], 00:31:09.062 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:31:09.062 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14353], 95.00th=[15139], 00:31:09.062 | 99.00th=[16319], 99.50th=[17171], 99.90th=[19006], 99.95th=[19530], 00:31:09.062 | 99.99th=[20055] 00:31:09.062 bw ( KiB/s): min=64192, max=78784, per=90.11%, avg=70656.00, stdev=7311.14, samples=4 00:31:09.062 iops : min= 4012, max= 4924, avg=4416.00, stdev=456.95, samples=4 00:31:09.062 lat (msec) : 4=0.10%, 10=54.36%, 20=45.54% 00:31:09.062 cpu : usr=76.36%, sys=22.44%, ctx=45, majf=0, minf=58 00:31:09.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:09.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:09.062 issued rwts: total=16545,8905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:09.062 00:31:09.062 Run status group 0 (all jobs): 00:31:09.062 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2006-2006msec 00:31:09.062 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=139MiB (146MB), run=1817-1817msec 00:31:09.062 13:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:09.321 13:42:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:31:09.321 13:42:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:12.680 Nvme0n1 00:31:12.680 13:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:15.370 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=9e5f4003-1374-4539-914d-230444f78c3c 00:31:15.370 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9e5f4003-1374-4539-914d-230444f78c3c 00:31:15.370 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=9e5f4003-1374-4539-914d-230444f78c3c 00:31:15.370 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:15.370 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:15.370 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1367 -- # local cs 00:31:15.370 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:15.628 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:15.628 { 00:31:15.628 "uuid": "9e5f4003-1374-4539-914d-230444f78c3c", 00:31:15.628 "name": "lvs_0", 00:31:15.628 "base_bdev": "Nvme0n1", 00:31:15.628 "total_data_clusters": 930, 00:31:15.628 "free_clusters": 930, 00:31:15.628 "block_size": 512, 00:31:15.628 "cluster_size": 1073741824 00:31:15.628 } 00:31:15.628 ]' 00:31:15.628 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9e5f4003-1374-4539-914d-230444f78c3c") .free_clusters' 00:31:15.628 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:15.628 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9e5f4003-1374-4539-914d-230444f78c3c") .cluster_size' 00:31:15.885 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:15.885 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:15.885 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:15.885 952320 00:31:15.886 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:16.142 173c9b49-b7d4-4352-8597-6cda2a04827e 00:31:16.142 13:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:16.406 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:16.971 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:17.229 13:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:17.229 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:17.229 fio-3.35 00:31:17.229 Starting 1 thread 00:31:19.755 00:31:19.755 test: (groupid=0, jobs=1): err= 0: pid=353222: Mon Oct 14 13:42:11 2024 00:31:19.755 read: IOPS=5914, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2008msec) 00:31:19.755 slat (nsec): min=1971, 
max=176489, avg=2664.91, stdev=2446.14 00:31:19.755 clat (usec): min=862, max=171121, avg=11721.77, stdev=11717.94 00:31:19.755 lat (usec): min=866, max=171166, avg=11724.44, stdev=11718.33 00:31:19.755 clat percentiles (msec): 00:31:19.755 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:19.755 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:19.755 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:19.755 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:19.755 | 99.99th=[ 171] 00:31:19.755 bw ( KiB/s): min=16448, max=26152, per=99.92%, avg=23638.00, stdev=4794.06, samples=4 00:31:19.755 iops : min= 4112, max= 6538, avg=5909.50, stdev=1198.51, samples=4 00:31:19.755 write: IOPS=5913, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2008msec); 0 zone resets 00:31:19.755 slat (usec): min=2, max=136, avg= 2.80, stdev= 1.80 00:31:19.755 clat (usec): min=322, max=169081, avg=9739.90, stdev=10961.70 00:31:19.755 lat (usec): min=325, max=169089, avg=9742.70, stdev=10962.08 00:31:19.755 clat percentiles (msec): 00:31:19.755 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:31:19.755 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:19.755 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:31:19.755 | 99.00th=[ 11], 99.50th=[ 18], 99.90th=[ 169], 99.95th=[ 169], 00:31:19.755 | 99.99th=[ 169] 00:31:19.755 bw ( KiB/s): min=17448, max=25800, per=99.81%, avg=23610.00, stdev=4110.37, samples=4 00:31:19.755 iops : min= 4362, max= 6450, avg=5902.50, stdev=1027.59, samples=4 00:31:19.755 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:19.755 lat (msec) : 2=0.03%, 4=0.10%, 10=54.30%, 20=45.00%, 250=0.54% 00:31:19.755 cpu : usr=62.18%, sys=36.47%, ctx=116, majf=0, minf=36 00:31:19.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:19.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.755 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:19.755 issued rwts: total=11876,11875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:19.755 00:31:19.755 Run status group 0 (all jobs): 00:31:19.755 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2008-2008msec 00:31:19.755 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2008-2008msec 00:31:19.755 13:42:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:20.012 13:42:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:21.382 13:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=b2968e2f-b8f0-43bf-b88a-421ef09db52c 00:31:21.382 13:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb b2968e2f-b8f0-43bf-b88a-421ef09db52c 00:31:21.382 13:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=b2968e2f-b8f0-43bf-b88a-421ef09db52c 00:31:21.382 13:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:21.382 13:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:21.382 13:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:21.382 13:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:21.382 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:21.382 { 00:31:21.382 "uuid": "9e5f4003-1374-4539-914d-230444f78c3c", 
00:31:21.382 "name": "lvs_0", 00:31:21.382 "base_bdev": "Nvme0n1", 00:31:21.382 "total_data_clusters": 930, 00:31:21.382 "free_clusters": 0, 00:31:21.382 "block_size": 512, 00:31:21.382 "cluster_size": 1073741824 00:31:21.382 }, 00:31:21.382 { 00:31:21.382 "uuid": "b2968e2f-b8f0-43bf-b88a-421ef09db52c", 00:31:21.382 "name": "lvs_n_0", 00:31:21.382 "base_bdev": "173c9b49-b7d4-4352-8597-6cda2a04827e", 00:31:21.382 "total_data_clusters": 237847, 00:31:21.382 "free_clusters": 237847, 00:31:21.382 "block_size": 512, 00:31:21.382 "cluster_size": 4194304 00:31:21.382 } 00:31:21.382 ]' 00:31:21.382 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b2968e2f-b8f0-43bf-b88a-421ef09db52c") .free_clusters' 00:31:21.640 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:21.640 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b2968e2f-b8f0-43bf-b88a-421ef09db52c") .cluster_size' 00:31:21.640 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:21.640 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:21.640 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:21.640 951388 00:31:21.640 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:22.205 714628e3-8295-4d21-beb1-1996220f6d93 00:31:22.205 13:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:22.463 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:22.721 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 
-- # grep libasan 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:22.979 13:42:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:23.236 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:23.236 fio-3.35 00:31:23.236 Starting 1 thread 00:31:25.761 [2024-10-14 13:42:17.322318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b3a0 is same with the state(6) to be set 00:31:25.761 [2024-10-14 13:42:17.322390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b3a0 is same 
with the state(6) to be set 00:31:25.761 [2024-10-14 13:42:17.322406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b3a0 is same with the state(6) to be set 00:31:25.761 [2024-10-14 13:42:17.322424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b3a0 is same with the state(6) to be set 00:31:25.761 [2024-10-14 13:42:17.322436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b3a0 is same with the state(6) to be set 00:31:25.761 00:31:25.761 test: (groupid=0, jobs=1): err= 0: pid=353961: Mon Oct 14 13:42:17 2024 00:31:25.761 read: IOPS=5805, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2009msec) 00:31:25.761 slat (usec): min=2, max=145, avg= 2.67, stdev= 2.11 00:31:25.761 clat (usec): min=4563, max=20322, avg=12026.02, stdev=1140.38 00:31:25.761 lat (usec): min=4576, max=20325, avg=12028.70, stdev=1140.25 00:31:25.761 clat percentiles (usec): 00:31:25.761 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:31:25.761 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12387], 00:31:25.761 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13698], 00:31:25.761 | 99.00th=[14484], 99.50th=[14746], 99.90th=[18482], 99.95th=[18744], 00:31:25.761 | 99.99th=[20317] 00:31:25.761 bw ( KiB/s): min=22296, max=23632, per=99.86%, avg=23188.00, stdev=619.76, samples=4 00:31:25.761 iops : min= 5574, max= 5908, avg=5797.00, stdev=154.94, samples=4 00:31:25.761 write: IOPS=5790, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2009msec); 0 zone resets 00:31:25.761 slat (usec): min=2, max=138, avg= 2.78, stdev= 1.58 00:31:25.761 clat (usec): min=2205, max=18307, avg=9925.42, stdev=901.69 00:31:25.761 lat (usec): min=2211, max=18310, avg=9928.20, stdev=901.64 00:31:25.761 clat percentiles (usec): 00:31:25.761 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:25.761 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:31:25.761 | 
70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:31:25.761 | 99.00th=[11863], 99.50th=[12256], 99.90th=[15533], 99.95th=[15664], 00:31:25.761 | 99.99th=[18220] 00:31:25.761 bw ( KiB/s): min=23072, max=23184, per=99.95%, avg=23150.00, stdev=52.41, samples=4 00:31:25.761 iops : min= 5768, max= 5796, avg=5787.50, stdev=13.10, samples=4 00:31:25.761 lat (msec) : 4=0.05%, 10=27.98%, 20=71.96%, 50=0.01% 00:31:25.761 cpu : usr=63.84%, sys=34.81%, ctx=122, majf=0, minf=36 00:31:25.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:25.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:25.761 issued rwts: total=11663,11633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:25.761 00:31:25.761 Run status group 0 (all jobs): 00:31:25.761 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2009-2009msec 00:31:25.761 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2009-2009msec 00:31:25.761 13:42:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:26.019 13:42:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:26.019 13:42:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:30.199 13:42:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:30.199 13:42:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_delete lvs_0/lbd_0 00:31:33.475 13:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:33.475 13:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.374 rmmod nvme_tcp 00:31:35.374 rmmod nvme_fabrics 00:31:35.374 rmmod nvme_keyring 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 350498 ']' 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 350498 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 350498 ']' 00:31:35.374 13:42:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 350498 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:35.374 13:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 350498 00:31:35.374 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:35.374 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:35.374 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 350498' 00:31:35.374 killing process with pid 350498 00:31:35.374 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 350498 00:31:35.374 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 350498 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.632 13:42:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.632 13:42:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.542 00:31:37.542 real 0m38.121s 00:31:37.542 user 2m26.686s 00:31:37.542 sys 0m6.755s 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.542 ************************************ 00:31:37.542 END TEST nvmf_fio_host 00:31:37.542 ************************************ 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.542 ************************************ 00:31:37.542 START TEST nvmf_failover 00:31:37.542 ************************************ 00:31:37.542 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:37.542 * Looking for test storage... 
00:31:37.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:37.801 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.802 --rc genhtml_branch_coverage=1 00:31:37.802 --rc genhtml_function_coverage=1 00:31:37.802 --rc genhtml_legend=1 00:31:37.802 --rc geninfo_all_blocks=1 00:31:37.802 --rc geninfo_unexecuted_blocks=1 00:31:37.802 00:31:37.802 ' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:31:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.802 --rc genhtml_branch_coverage=1 00:31:37.802 --rc genhtml_function_coverage=1 00:31:37.802 --rc genhtml_legend=1 00:31:37.802 --rc geninfo_all_blocks=1 00:31:37.802 --rc geninfo_unexecuted_blocks=1 00:31:37.802 00:31:37.802 ' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.802 --rc genhtml_branch_coverage=1 00:31:37.802 --rc genhtml_function_coverage=1 00:31:37.802 --rc genhtml_legend=1 00:31:37.802 --rc geninfo_all_blocks=1 00:31:37.802 --rc geninfo_unexecuted_blocks=1 00:31:37.802 00:31:37.802 ' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.802 --rc genhtml_branch_coverage=1 00:31:37.802 --rc genhtml_function_coverage=1 00:31:37.802 --rc genhtml_legend=1 00:31:37.802 --rc geninfo_all_blocks=1 00:31:37.802 --rc geninfo_unexecuted_blocks=1 00:31:37.802 00:31:37.802 ' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:37.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.802 13:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.340 13:42:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:40.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.340 13:42:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:40.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:40.340 13:42:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:40.340 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:40.340 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:40.340 13:42:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.340 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:31:40.341 00:31:40.341 --- 10.0.0.2 ping statistics --- 00:31:40.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.341 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:31:40.341 00:31:40.341 --- 10.0.0.1 ping statistics --- 00:31:40.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.341 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=357330 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@508 -- # waitforlisten 357330 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 357330 ']' 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:40.341 13:42:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.341 [2024-10-14 13:42:31.866749] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:31:40.341 [2024-10-14 13:42:31.866835] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.341 [2024-10-14 13:42:31.935800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.341 [2024-10-14 13:42:31.981421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.341 [2024-10-14 13:42:31.981493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.341 [2024-10-14 13:42:31.981507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.341 [2024-10-14 13:42:31.981531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:40.341 [2024-10-14 13:42:31.981540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.341 [2024-10-14 13:42:31.983102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.341 [2024-10-14 13:42:31.983206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.341 [2024-10-14 13:42:31.983210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.341 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:40.341 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:40.341 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:40.341 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:40.341 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.341 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.341 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:40.599 [2024-10-14 13:42:32.367460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.599 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:40.857 Malloc0 00:31:40.858 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:41.115 13:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:41.373 13:42:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.632 [2024-10-14 13:42:33.468864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.889 13:42:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:41.889 [2024-10-14 13:42:33.733626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:42.148 13:42:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:42.148 [2024-10-14 13:42:33.994610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=357620 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 357620 /var/tmp/bdevperf.sock 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- 
# '[' -z 357620 ']' 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:42.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:42.406 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:42.663 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:42.663 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:42.663 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:42.921 NVMe0n1 00:31:43.180 13:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:43.438 00:31:43.438 13:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=357752 00:31:43.438 13:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:43.438 13:42:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
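The harness above blocks on `waitforlisten` until bdevperf has created its UNIX domain RPC socket at `/var/tmp/bdevperf.sock` before issuing any `rpc.py -s /var/tmp/bdevperf.sock` calls. A minimal sketch of that wait pattern is below; `wait_for_socket` is a hypothetical name for illustration (SPDK's real helper lives in `test/common/autotest_common.sh` and additionally verifies the PID is still alive while waiting):

```shell
#!/usr/bin/env bash
# Sketch of the "waitforlisten" pattern from the log: poll until a process
# has created its UNIX domain RPC socket, or give up after a timeout.
# NOTE: wait_for_socket is a hypothetical helper name, not SPDK's actual API.
wait_for_socket() {
    local sock=$1 timeout=${2:-10} i=0
    # Poll every 100 ms; timeout is expressed in whole seconds.
    while [ "$i" -lt "$((timeout * 10))" ]; do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
        i=$((i + 1))
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Usage: start a listener in the background, then wait for its socket to appear.
sock=$(mktemp -u /tmp/demo.XXXXXX.sock)
( sleep 0.3; python3 -c "import socket,sys; socket.socket(socket.AF_UNIX).bind(sys.argv[1])" "$sock" ) &
wait_for_socket "$sock" 5 && echo "socket ready"
rm -f "$sock"
```

Polling for the socket (rather than sleeping a fixed interval) is what lets the test proceed as soon as bdevperf is actually ready, and fail fast with a clear message if it never comes up.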
00:31:44.373 13:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.630 13:42:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:47.911 13:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:48.170 00:31:48.170 13:42:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:48.429 [2024-10-14 13:42:40.237834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed710 is same with the state(6) to be set 00:31:48.429 [2024-10-14 13:42:40.237912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed710 is same with the state(6) to be set 00:31:48.429 [2024-10-14 13:42:40.237950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed710 is same with the state(6) to be set 00:31:48.429 [2024-10-14 13:42:40.237963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed710 is same with the state(6) to be set 00:31:48.429 [2024-10-14 13:42:40.237975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed710 is same with the state(6) to be set 00:31:48.429 [2024-10-14 13:42:40.237986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed710 is same with the state(6) to be set 00:31:48.429 [2024-10-14 13:42:40.238010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ed710 is same with the state(6) to 
be set 00:31:48.429 13:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:51.714 13:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:51.714 [2024-10-14 13:42:43.511232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.715 13:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:53.091 13:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:53.091 13:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 357752 00:31:59.663 { 00:31:59.663 "results": [ 00:31:59.663 { 00:31:59.663 "job": "NVMe0n1", 00:31:59.663 "core_mask": "0x1", 00:31:59.663 "workload": "verify", 00:31:59.663 "status": "finished", 00:31:59.663 "verify_range": { 00:31:59.663 "start": 0, 00:31:59.663 "length": 16384 00:31:59.663 }, 00:31:59.663 "queue_depth": 128, 00:31:59.663 "io_size": 4096, 00:31:59.663 "runtime": 15.009443, 00:31:59.663 "iops": 8458.808231591272, 00:31:59.663 "mibps": 33.04221965465341, 00:31:59.663 "io_failed": 7165, 00:31:59.663 "io_timeout": 0, 00:31:59.663 "avg_latency_us": 14295.240115832728, 00:31:59.663 "min_latency_us": 546.1333333333333, 00:31:59.663 "max_latency_us": 17282.085925925927 00:31:59.663 } 00:31:59.663 ], 00:31:59.663 "core_count": 1 00:31:59.663 } 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 357620 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 357620 ']' 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 357620 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357620 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357620' 00:31:59.663 killing process with pid 357620 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 357620 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 357620 00:31:59.663 13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:59.663 [2024-10-14 13:42:34.061773] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:31:59.663 [2024-10-14 13:42:34.061860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357620 ] 00:31:59.663 [2024-10-14 13:42:34.123588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.663 [2024-10-14 13:42:34.171617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.663 Running I/O for 15 seconds... 
00:31:59.663 8388.00 IOPS, 32.77 MiB/s [2024-10-14T11:42:51.516Z] [2024-10-14 13:42:36.370366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.663 [2024-10-14 13:42:36.370436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 
[2024-10-14 13:42:36.370619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370796] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.370975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.370989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.663 [2024-10-14 13:42:36.371331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.663 [2024-10-14 13:42:36.371346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 
13:42:36.371493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.371979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.371995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 
[2024-10-14 13:42:36.372354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.664 [2024-10-14 13:42:36.372596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.664 [2024-10-14 13:42:36.372611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.372978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.372994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.373007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.373036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.373069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.373098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.373149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.665 [2024-10-14 13:42:36.373179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 
[2024-10-14 13:42:36.373427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373765] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.665 [2024-10-14 13:42:36.373869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.665 [2024-10-14 13:42:36.373881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:31:59.665 [2024-10-14 13:42:36.373897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.665 [2024-10-14 13:42:36.373910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.373921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 
13:42:36.373932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.373944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.373957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.373968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.373979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.373992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374275] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 
[2024-10-14 13:42:36.374456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77416 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77424 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.374953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.374964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77432 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.374977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.374989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.375000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.375011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77440 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.375026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.375040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.666 [2024-10-14 13:42:36.375051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.666 [2024-10-14 13:42:36.375062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77448 len:8 PRP1 0x0 PRP2 0x0 00:31:59.666 [2024-10-14 13:42:36.375074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.666 [2024-10-14 13:42:36.375153] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140bac0 was disconnected and freed. reset controller. 
00:31:59.666 [2024-10-14 13:42:36.375173] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:59.666 [2024-10-14 13:42:36.375208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.667 [2024-10-14 13:42:36.375226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.667 [2024-10-14 13:42:36.375241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.667 [2024-10-14 13:42:36.375255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.667 [2024-10-14 13:42:36.375269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.667 [2024-10-14 13:42:36.375282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.667 [2024-10-14 13:42:36.375295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.667 [2024-10-14 13:42:36.375308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.667 [2024-10-14 13:42:36.375330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
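The entries above follow a fixed format: each aborted command is printed by `nvme_io_qpair_print_command` with its opcode, sqid, cid, nsid, lba, and length, followed by the `ABORTED - SQ DELETION (00/08)` completion, before the qpair is freed and failover from 10.0.0.2:4420 to 10.0.0.2:4421 begins. As an illustrative sketch only (a hypothetical helper, not part of the SPDK test harness), the aborted commands can be extracted from such log text with a regular expression:

```python
import re

# Hypothetical helper (not part of SPDK): pull (opcode, lba, len) out of
# SPDK nvme_qpair "print_command" log entries like the ones above.
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) "
    r"lba:(\d+) len:(\d+)"
)

def aborted_commands(log_text):
    """Return an (opcode, lba, length) tuple per printed I/O command."""
    return [
        (m.group(1), int(m.group(5)), int(m.group(6)))
        for m in CMD_RE.finditer(log_text)
    ]

# One entry copied from the log stream above:
sample = (
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
    "WRITE sqid:1 cid:107 nsid:1 lba:77992 len:8 "
    "SGL DATA BLOCK OFFSET 0x0 len:0x1000"
)
print(aborted_commands(sample))  # [('WRITE', 77992, 8)]
```

Run over the full stream, this tallies every queued write and read the driver had to abort before the controller reset completed.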
00:31:59.667 [2024-10-14 13:42:36.375396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ead20 (9): Bad file descriptor
00:31:59.667 [2024-10-14 13:42:36.378627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:59.667 [2024-10-14 13:42:36.488068] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:59.667 8008.50 IOPS, 31.28 MiB/s [2024-10-14T11:42:51.520Z]
00:31:59.667 8197.33 IOPS, 32.02 MiB/s [2024-10-14T11:42:51.520Z]
00:31:59.667 8302.25 IOPS, 32.43 MiB/s [2024-10-14T11:42:51.520Z]
00:31:59.667 [2024-10-14 13:42:40.239849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.667 [2024-10-14 13:42:40.239891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.668 [2024-10-14 13:42:40.241260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:59.668 [2024-10-14 13:42:40.241274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/WRITE + "ABORTED - SQ DELETION" message pair repeats for each outstanding command, lba 105792 through 106704 ...]
00:31:59.670 [2024-10-14 13:42:40.243415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:59.670 [2024-10-14 13:42:40.243433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106712 len:8 PRP1 0x0 PRP2 0x0
00:31:59.670 [2024-10-14 13:42:40.243447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.670 [2024-10-14 13:42:40.243464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:59.670 [2024-10-14 13:42:40.243489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:59.670 [2024-10-14 13:42:40.243505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:106720 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106728 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106736 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106744 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 
13:42:40.243671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106752 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106760 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106768 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 
[2024-10-14 13:42:40.243837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106776 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106784 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106792 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.243965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.243976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.243986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106800 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.243999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.244011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.670 [2024-10-14 13:42:40.244022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.670 [2024-10-14 13:42:40.244033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106200 len:8 PRP1 0x0 PRP2 0x0 00:31:59.670 [2024-10-14 13:42:40.244045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.244103] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140db90 was disconnected and freed. reset controller. 00:31:59.670 [2024-10-14 13:42:40.244125] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:59.670 [2024-10-14 13:42:40.244167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.670 [2024-10-14 13:42:40.244192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.244207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.670 [2024-10-14 13:42:40.244221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.244234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.670 [2024-10-14 13:42:40.244247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:59.670 [2024-10-14 13:42:40.244260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.670 [2024-10-14 13:42:40.244274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:40.244287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:59.670 8328.80 IOPS, 32.53 MiB/s [2024-10-14T11:42:51.523Z] [2024-10-14 13:42:40.247584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:59.670 [2024-10-14 13:42:40.247624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ead20 (9): Bad file descriptor 00:31:59.670 [2024-10-14 13:42:40.285714] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:59.670 8287.17 IOPS, 32.37 MiB/s [2024-10-14T11:42:51.523Z] 8336.86 IOPS, 32.57 MiB/s [2024-10-14T11:42:51.523Z] 8381.75 IOPS, 32.74 MiB/s [2024-10-14T11:42:51.523Z] 8432.89 IOPS, 32.94 MiB/s [2024-10-14T11:42:51.523Z] [2024-10-14 13:42:44.785630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:44.785722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:44.785755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:44.785785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:44.785815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:44.785844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:44.785889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.670 [2024-10-14 13:42:44.785917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:59.670 [2024-10-14 13:42:44.785946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.670 [2024-10-14 13:42:44.785959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.785974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.671 [2024-10-14 13:42:44.785988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.671 [2024-10-14 13:42:44.786016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.671 [2024-10-14 13:42:44.786045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.671 [2024-10-14 13:42:44.786089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.671 [2024-10-14 13:42:44.786117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 
[2024-10-14 13:42:44.786498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.671 [2024-10-14 13:42:44.786622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 
[2024-10-14 13:42:44.786972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.786985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.786999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.671 [2024-10-14 13:42:44.787012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.671 [2024-10-14 13:42:44.787027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.672 [2024-10-14 13:42:44.787039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.672 [2024-10-14 13:42:44.787054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.672 [2024-10-14 13:42:44.787067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.672 [2024-10-14 13:42:44.787081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.672 [2024-10-14 13:42:44.787094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.672 [2024-10-14 13:42:44.787109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.672 [2024-10-14 13:42:44.787122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.672 [2024-10-14 13:42:44.787162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.672 [2024-10-14 13:42:44.787177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: READ commands on qid:1 for lba:35080 through lba:35680 (len:8 each, varying cid) plus two WRITE commands (lba:35816 and lba:35824), every one completed as ABORTED - SQ DELETION (00/08) during queue teardown ...]
[2024-10-14 13:42:44.789510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140dd70 is same with the state(6) to be set
00:31:59.674 [2024-10-14 13:42:44.789527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:59.674 [2024-10-14 13:42:44.789539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:59.674 [2024-10-14 13:42:44.789550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35688 len:8 PRP1 0x0 PRP2 0x0
00:31:59.674 [2024-10-14 13:42:44.789562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.674 [2024-10-14 13:42:44.789624] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140dd70 was disconnected and freed. reset controller.
00:31:59.674 [2024-10-14 13:42:44.789643] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:59.674 [2024-10-14 13:42:44.789676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:59.674 [2024-10-14 13:42:44.789694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.674 [2024-10-14 13:42:44.789709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:59.674 [2024-10-14 13:42:44.789722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.674 [2024-10-14 13:42:44.789736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:59.674 [2024-10-14 13:42:44.789748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.674 [2024-10-14 13:42:44.789761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:59.674 [2024-10-14 13:42:44.789774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:59.674 [2024-10-14 13:42:44.789787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:59.674 [2024-10-14 13:42:44.793052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:59.674 [2024-10-14 13:42:44.793090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ead20 (9): Bad file descriptor
00:31:59.674 [2024-10-14 13:42:44.831153] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:59.674 8411.80 IOPS, 32.86 MiB/s [2024-10-14T11:42:51.527Z] 8413.82 IOPS, 32.87 MiB/s [2024-10-14T11:42:51.527Z] 8425.00 IOPS, 32.91 MiB/s [2024-10-14T11:42:51.527Z] 8438.38 IOPS, 32.96 MiB/s [2024-10-14T11:42:51.527Z] 8454.14 IOPS, 33.02 MiB/s
00:31:59.674 Latency(us)
00:31:59.674 [2024-10-14T11:42:51.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:59.674 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:59.674 Verification LBA range: start 0x0 length 0x4000
00:31:59.674 NVMe0n1 : 15.01 8458.81 33.04 477.37 0.00 14295.24 546.13 17282.09
00:31:59.674 [2024-10-14T11:42:51.527Z] ===================================================================================================================
00:31:59.674 [2024-10-14T11:42:51.527Z] Total : 8458.81 33.04 477.37 0.00 14295.24 546.13 17282.09
00:31:59.674 Received shutdown signal, test time was about 15.000000 seconds
00:31:59.674
00:31:59.674 Latency(us)
00:31:59.674 [2024-10-14T11:42:51.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:59.674 [2024-10-14T11:42:51.527Z] ===================================================================================================================
00:31:59.674 [2024-10-14T11:42:51.527Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=359482 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:59.674
13:42:50 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 359482 /var/tmp/bdevperf.sock 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 359482 ']' 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:59.674
13:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:59.674
[2024-10-14 13:42:51.036520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:59.674
13:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:59.674
[2024-10-14 13:42:51.305289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:59.674
13:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:59.932
NVMe0n1 00:31:59.932
13:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:00.497
00:32:00.497
13:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:00.755
00:32:00.755
13:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:00.755
13:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:01.013
13:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.270
13:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:04.549
13:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:04.549
13:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:04.550
13:42:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=360146 00:32:04.550
13:42:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:04.550 13:42:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 360146 00:32:05.922 { 00:32:05.922 "results": [ 00:32:05.922 { 00:32:05.922 "job": "NVMe0n1", 00:32:05.922 "core_mask": "0x1", 00:32:05.922 "workload": "verify", 00:32:05.922 "status": "finished", 00:32:05.922 "verify_range": { 00:32:05.922 "start": 0, 00:32:05.922 "length": 16384 00:32:05.922 }, 00:32:05.922 "queue_depth": 128, 00:32:05.922 "io_size": 4096, 00:32:05.922 "runtime": 1.010616, 00:32:05.922 "iops": 8641.26433779002, 00:32:05.922 "mibps": 33.75493881949227, 00:32:05.922 "io_failed": 0, 00:32:05.922 "io_timeout": 0, 00:32:05.922 "avg_latency_us": 14741.447560593915, 00:32:05.922 "min_latency_us": 2936.9837037037037, 00:32:05.922 "max_latency_us": 15922.82074074074 00:32:05.922 } 00:32:05.922 ], 00:32:05.922 "core_count": 1 00:32:05.922 } 00:32:05.923 13:42:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:05.923 [2024-10-14 13:42:50.547486] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:32:05.923 [2024-10-14 13:42:50.547600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359482 ] 00:32:05.923 [2024-10-14 13:42:50.612098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.923 [2024-10-14 13:42:50.658550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.923 [2024-10-14 13:42:52.928904] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:05.923 [2024-10-14 13:42:52.928994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.923 [2024-10-14 13:42:52.929018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.923 [2024-10-14 13:42:52.929050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.923 [2024-10-14 13:42:52.929064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.923 [2024-10-14 13:42:52.929078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.923 [2024-10-14 13:42:52.929092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.923 [2024-10-14 13:42:52.929106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.923 [2024-10-14 13:42:52.929142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.923 [2024-10-14 13:42:52.929158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.923 [2024-10-14 13:42:52.929207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.923 [2024-10-14 13:42:52.929238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc10d20 (9): Bad file descriptor 00:32:05.923 [2024-10-14 13:42:53.061255] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:05.923 Running I/O for 1 seconds... 00:32:05.923 8605.00 IOPS, 33.61 MiB/s 00:32:05.923 Latency(us) 00:32:05.923 [2024-10-14T11:42:57.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.923 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:05.923 Verification LBA range: start 0x0 length 0x4000 00:32:05.923 NVMe0n1 : 1.01 8641.26 33.75 0.00 0.00 14741.45 2936.98 15922.82 00:32:05.923 [2024-10-14T11:42:57.776Z] =================================================================================================================== 00:32:05.923 [2024-10-14T11:42:57.776Z] Total : 8641.26 33.75 0.00 0.00 14741.45 2936.98 15922.82 00:32:05.923 13:42:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:05.923 13:42:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:05.923 13:42:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:06.488 13:42:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:06.488 13:42:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:06.488 13:42:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:07.053 13:42:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 359482 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 359482 ']' 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 359482 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 359482 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 359482' 00:32:10.335 killing process with pid 359482 00:32:10.335 13:43:01 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 359482 00:32:10.335 13:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 359482 00:32:10.335 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:10.335 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:10.592 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:10.592 rmmod nvme_tcp 00:32:10.850 rmmod nvme_fabrics 00:32:10.850 rmmod nvme_keyring 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 357330 ']' 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 
357330 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 357330 ']' 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 357330 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357330 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357330' 00:32:10.850 killing process with pid 357330 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 357330 00:32:10.850 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 357330 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.110 13:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:13.017 00:32:13.017 real 0m35.442s 00:32:13.017 user 2m5.020s 00:32:13.017 sys 0m5.953s 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 ************************************ 00:32:13.017 END TEST nvmf_failover 00:32:13.017 ************************************ 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.017 ************************************ 00:32:13.017 START TEST nvmf_host_discovery 00:32:13.017 ************************************ 00:32:13.017 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:13.276 * Looking for test storage... 
00:32:13.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:13.276 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:13.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.277 --rc genhtml_branch_coverage=1 00:32:13.277 --rc genhtml_function_coverage=1 00:32:13.277 --rc 
genhtml_legend=1 00:32:13.277 --rc geninfo_all_blocks=1 00:32:13.277 --rc geninfo_unexecuted_blocks=1 00:32:13.277 00:32:13.277 ' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:13.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.277 --rc genhtml_branch_coverage=1 00:32:13.277 --rc genhtml_function_coverage=1 00:32:13.277 --rc genhtml_legend=1 00:32:13.277 --rc geninfo_all_blocks=1 00:32:13.277 --rc geninfo_unexecuted_blocks=1 00:32:13.277 00:32:13.277 ' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:13.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.277 --rc genhtml_branch_coverage=1 00:32:13.277 --rc genhtml_function_coverage=1 00:32:13.277 --rc genhtml_legend=1 00:32:13.277 --rc geninfo_all_blocks=1 00:32:13.277 --rc geninfo_unexecuted_blocks=1 00:32:13.277 00:32:13.277 ' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:13.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.277 --rc genhtml_branch_coverage=1 00:32:13.277 --rc genhtml_function_coverage=1 00:32:13.277 --rc genhtml_legend=1 00:32:13.277 --rc geninfo_all_blocks=1 00:32:13.277 --rc geninfo_unexecuted_blocks=1 00:32:13.277 00:32:13.277 ' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.277 13:43:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.277 13:43:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.277 13:43:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:13.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:13.277 13:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.809 
13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.809 13:43:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:15.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:15.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:15.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:15.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.809 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:32:15.810 00:32:15.810 --- 10.0.0.2 ping statistics --- 00:32:15.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.810 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:15.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:32:15.810 00:32:15.810 --- 10.0.0.1 ping statistics --- 00:32:15.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.810 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.810 
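The trace above builds the test topology by moving one port of the NIC into a network namespace so a single machine can act as both initiator and target. A minimal sketch of the equivalent commands, with interface names, addresses, and the iptables rule taken from this log (requires root and the same two-port NIC, so it is environment-dependent rather than directly runnable):

```shell
# Sketch of the namespace-based NVMe/TCP test topology from the trace above.
# Interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing come from this log.
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # verify host -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and target -> host
```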
13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=362871 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 362871 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 362871 ']' 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 [2024-10-14 13:43:07.302258] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:32:15.810 [2024-10-14 13:43:07.302348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.810 [2024-10-14 13:43:07.365419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.810 [2024-10-14 13:43:07.406789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.810 [2024-10-14 13:43:07.406848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.810 [2024-10-14 13:43:07.406875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.810 [2024-10-14 13:43:07.406886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.810 [2024-10-14 13:43:07.406895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:15.810 [2024-10-14 13:43:07.407518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 [2024-10-14 13:43:07.547331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 [2024-10-14 13:43:07.555561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:15.810 13:43:07 
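The target-side steps traced above (nvmfappstart, transport creation, discovery listener) reduce to launching nvmf_tgt inside the namespace and issuing two RPCs. A sketch of that sequence, with flags and the discovery NQN copied from this log; paths are relative to an SPDK checkout and assume the standard `scripts/rpc.py` client, so treat this as an illustration rather than the autotest source:

```shell
# Sketch of the target-side startup traced above (flags from this log).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# Once the app listens on /var/tmp/spdk.sock, create the TCP transport
# and expose the well-known discovery subsystem on port 8009.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
```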
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 null0 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 null1 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=362896 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 362896 /tmp/host.sock 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 362896 ']' 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:15.810 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:15.810 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:15.810 [2024-10-14 13:43:07.628521] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:32:15.810 [2024-10-14 13:43:07.628599] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362896 ] 00:32:16.068 [2024-10-14 13:43:07.687542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.068 [2024-10-14 13:43:07.734090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:16.068 13:43:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.068 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:16.328 13:43:07 
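On the host side, the trace launches a second SPDK application bound to its own RPC socket and drives it over that socket. A sketch of those steps, with the socket path, mask, and host NQN taken from this log (environment-dependent; it assumes a built SPDK tree):

```shell
# Sketch of the host-side application from the trace: a second nvmf_tgt
# with a private RPC socket, used purely as the NVMe-oF host/initiator.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
# Start the discovery service against the target's discovery listener.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```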
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.328 13:43:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:16.328 13:43:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 [2024-10-14 13:43:08.173163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.328 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
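The subsystem the test exercises is assembled piecewise in the trace: two null bdevs, a subsystem, a namespace, a data listener on port 4420, and finally a host entry. A sketch of those target-side RPCs in order, with every name (cnode0, null0/null1, the test NQN) taken from this log; `scripts/rpc.py` is the assumed client:

```shell
# Sketch of the RPCs that build the test subsystem traced above.
./scripts/rpc.py bdev_null_create null0 1000 512         # 1000 blocks, 512 B each
./scripts/rpc.py bdev_null_create null1 1000 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# Allowing the host NQN is what lets the discovery service attach nvme0 later.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2021-12.io.spdk:test
```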
-- # sort 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:16.588 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
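The `waitforcondition` helper seen throughout the trace polls an arbitrary shell condition string until it holds or a retry budget runs out. A minimal self-contained sketch of that pattern (the helper name, `max=10`, and the `eval`-based check mirror the trace; this is an illustration, not the autotest_common.sh source):

```shell
# Poll a shell condition until it evaluates true or the retry budget runs out,
# mirroring the waitforcondition pattern in the trace (max=10, sleep 1).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0            # condition held
        fi
        sleep 1
    done
    return 1                    # budget exhausted
}

# Example: wait until a marker file exists (created up front, so this returns fast).
touch "/tmp/ready.$$"
waitforcondition "[[ -e /tmp/ready.$$ ]]" && echo "condition met"
rm -f "/tmp/ready.$$"
```

In the log the condition strings compare `get_subsystem_names` or `get_bdev_list` output against expected values, which is why each poll re-runs the RPC-plus-`jq` pipeline.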
00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:16.589 13:43:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:17.154 [2024-10-14 13:43:08.894208] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:17.154 [2024-10-14 13:43:08.894239] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:17.154 [2024-10-14 13:43:08.894265] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:17.154 [2024-10-14 13:43:08.980574] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:17.412 [2024-10-14 13:43:09.077964] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:17.412 [2024-10-14 13:43:09.077987] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:17.670 13:43:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:17.670 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:17.671 13:43:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:17.671 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:17.929 13:43:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.929 [2024-10-14 13:43:09.773782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:17.929 [2024-10-14 13:43:09.774838] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:17.929 [2024-10-14 13:43:09.774885] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:17.929 13:43:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.929 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:18.188 
13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.188 [2024-10-14 13:43:09.861590] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # sort -n 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:18.188 13:43:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:18.188 [2024-10-14 13:43:09.920389] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:18.188 [2024-10-14 13:43:09.920413] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:18.188 [2024-10-14 13:43:09.920422] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 
00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.121 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.380 [2024-10-14 13:43:10.993711] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:19.380 [2024-10-14 13:43:10.993752] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:19.380 [2024-10-14 13:43:10.998135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.380 [2024-10-14 13:43:10.998185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.380 [2024-10-14 13:43:10.998202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.380 [2024-10-14 13:43:10.998215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.380 [2024-10-14 13:43:10.998235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.380 [2024-10-14 13:43:10.998253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.380 [2024-10-14 13:43:10.998267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:19.380 [2024-10-14 13:43:10.998280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.380 [2024-10-14 13:43:10.998299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:19.380 13:43:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:19.380 13:43:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:19.380 [2024-10-14 13:43:11.008106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.380 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.380 [2024-10-14 13:43:11.018157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.380 [2024-10-14 13:43:11.018352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.380 [2024-10-14 13:43:11.018382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.380 [2024-10-14 13:43:11.018400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.380 [2024-10-14 13:43:11.018432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.380 [2024-10-14 13:43:11.018454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.380 [2024-10-14 13:43:11.018468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.380 
[2024-10-14 13:43:11.018486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.380 [2024-10-14 13:43:11.018506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.380 [2024-10-14 13:43:11.028239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.380 [2024-10-14 13:43:11.028377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.380 [2024-10-14 13:43:11.028406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.380 [2024-10-14 13:43:11.028429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.380 [2024-10-14 13:43:11.028466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.380 [2024-10-14 13:43:11.028497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.028514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.028527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.028546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
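The repeated `local max=10` / `(( max-- ))` / `eval` / `sleep 1` lines in the trace come from the `waitforcondition` helper in autotest_common.sh (the @914–@920 markers). A minimal sketch of that polling pattern, reconstructed from the xtrace output — the exact body of the real helper is an assumption:

```shell
# Reconstruction of the waitforcondition loop visible in the trace:
# evaluate a condition string up to 10 times, one second apart,
# returning 0 as soon as it holds and 1 on timeout.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval lets the caller pass any bash test, e.g.
        # '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# A condition that already holds returns immediately:
waitforcondition '[[ nvme0 == nvme0 ]]' && echo "condition met"
```

This is why the trace shows the same `get_subsystem_names` / `get_bdev_list` probe re-running once per second until the expected string appears.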
00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:19.381 [2024-10-14 13:43:11.038329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:19.381 [2024-10-14 13:43:11.038558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.038588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.038604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.038627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.038648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.038662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:32:19.381 [2024-10-14 13:43:11.038674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.038694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:19.381 [2024-10-14 13:43:11.048408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.048661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.048690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.048706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.048736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.048783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.048802] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.048815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.048835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.381 [2024-10-14 13:43:11.058519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.058732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.058760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.058777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.058799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.058825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.058840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.058853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.058873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.381 [2024-10-14 13:43:11.068606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.068840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.068868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.068884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.068906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.068938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.068955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.068968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.068987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:19.381 [2024-10-14 13:43:11.078693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.078895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.078923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.078940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.078962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.078983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.078997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.079010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.079029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:19.381 [2024-10-14 13:43:11.088767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.088980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.089009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.089025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.089047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.089085] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.089102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.089116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.089145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.381 [2024-10-14 13:43:11.098837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.099040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.099068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.099085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.099106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.099136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.099152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.099166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.099185] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.381 [2024-10-14 13:43:11.108919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.109044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.381 [2024-10-14 13:43:11.109072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.381 [2024-10-14 13:43:11.109088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.381 [2024-10-14 13:43:11.109110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.381 [2024-10-14 13:43:11.109167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.381 [2024-10-14 13:43:11.109185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.381 [2024-10-14 13:43:11.109203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.381 [2024-10-14 13:43:11.109224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:19.381 13:43:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:19.381 [2024-10-14 13:43:11.118988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.381 [2024-10-14 13:43:11.119211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:19.382 [2024-10-14 13:43:11.119239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf79c0 with addr=10.0.0.2, port=4420 00:32:19.382 [2024-10-14 13:43:11.119255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf79c0 is same with the state(6) to be set 00:32:19.382 [2024-10-14 13:43:11.119277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf79c0 (9): Bad file descriptor 00:32:19.382 [2024-10-14 13:43:11.119297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.382 [2024-10-14 13:43:11.119310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.382 [2024-10-14 13:43:11.119323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.382 [2024-10-14 13:43:11.119341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:19.382 [2024-10-14 13:43:11.119603] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:19.382 [2024-10-14 13:43:11.119628] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 
-- # expected_count=0 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.315 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:20.574 13:43:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.574 
13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:20.574 13:43:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.574 13:43:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.949 [2024-10-14 13:43:13.396818] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:21.949 [2024-10-14 13:43:13.396843] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:21.949 [2024-10-14 13:43:13.396865] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:21.949 [2024-10-14 13:43:13.524287] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:22.208 [2024-10-14 13:43:13.834714] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:22.208 [2024-10-14 13:43:13.834752] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:22.208 request: 00:32:22.208 { 00:32:22.208 "name": "nvme", 00:32:22.208 "trtype": "tcp", 00:32:22.208 "traddr": "10.0.0.2", 00:32:22.208 "adrfam": "ipv4", 00:32:22.208 "trsvcid": "8009", 00:32:22.208 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:22.208 "wait_for_attach": true, 00:32:22.208 "method": "bdev_nvme_start_discovery", 00:32:22.208 "req_id": 1 00:32:22.208 } 00:32:22.208 Got JSON-RPC error response 00:32:22.208 response: 00:32:22.208 { 00:32:22.208 "code": -17, 00:32:22.208 "message": "File exists" 00:32:22.208 } 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.208 13:43:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.208 request: 00:32:22.208 { 00:32:22.208 "name": "nvme_second", 00:32:22.208 "trtype": "tcp", 00:32:22.208 "traddr": "10.0.0.2", 00:32:22.208 "adrfam": "ipv4", 00:32:22.208 "trsvcid": "8009", 00:32:22.208 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:22.208 "wait_for_attach": true, 00:32:22.208 "method": "bdev_nvme_start_discovery", 00:32:22.208 "req_id": 1 00:32:22.208 } 00:32:22.208 Got JSON-RPC error response 00:32:22.208 response: 00:32:22.208 { 00:32:22.208 "code": -17, 00:32:22.208 "message": "File exists" 00:32:22.208 } 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:22.208 
13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:22.208 13:43:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:22.208 13:43:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.208 13:43:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.582 [2024-10-14 13:43:15.042235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.582 [2024-10-14 13:43:15.042302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2de70 with addr=10.0.0.2, port=8010 00:32:23.582 [2024-10-14 13:43:15.042337] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:23.582 [2024-10-14 13:43:15.042353] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:23.582 [2024-10-14 13:43:15.042366] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:24.516 [2024-10-14 13:43:16.044679] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.516 [2024-10-14 13:43:16.044733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2de70 with addr=10.0.0.2, port=8010 00:32:24.516 [2024-10-14 13:43:16.044766] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:24.516 [2024-10-14 13:43:16.044781] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:24.516 [2024-10-14 13:43:16.044795] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:25.450 [2024-10-14 13:43:17.046814] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:25.450 request: 00:32:25.450 { 00:32:25.450 "name": "nvme_second", 00:32:25.450 "trtype": "tcp", 00:32:25.450 "traddr": "10.0.0.2", 00:32:25.450 "adrfam": "ipv4", 00:32:25.450 "trsvcid": "8010", 00:32:25.450 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:25.450 "wait_for_attach": false, 00:32:25.450 "attach_timeout_ms": 3000, 00:32:25.450 "method": "bdev_nvme_start_discovery", 00:32:25.450 "req_id": 1 00:32:25.450 } 00:32:25.450 Got JSON-RPC error response 00:32:25.450 response: 00:32:25.450 { 00:32:25.450 "code": -110, 00:32:25.450 "message": "Connection timed out" 00:32:25.450 } 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 362896 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.450 rmmod nvme_tcp 00:32:25.450 rmmod nvme_fabrics 00:32:25.450 rmmod nvme_keyring 00:32:25.450 13:43:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 362871 ']' 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 362871 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 362871 ']' 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 362871 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362871 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362871' 00:32:25.450 killing process with pid 362871 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 362871 00:32:25.450 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 362871 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:25.709 13:43:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.709 13:43:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.623 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.623 00:32:27.623 real 0m14.583s 00:32:27.623 user 0m21.568s 00:32:27.623 sys 0m3.017s 00:32:27.623 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:27.623 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.623 ************************************ 00:32:27.623 END TEST nvmf_host_discovery 00:32:27.623 ************************************ 00:32:27.623 13:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:27.623 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:27.623 
13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:27.623 13:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.623 ************************************ 00:32:27.623 START TEST nvmf_host_multipath_status 00:32:27.623 ************************************ 00:32:27.623 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:27.882 * Looking for test storage... 00:32:27.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.882 13:43:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.882 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.883 
13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:27.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.883 --rc genhtml_branch_coverage=1 00:32:27.883 --rc genhtml_function_coverage=1 00:32:27.883 --rc genhtml_legend=1 00:32:27.883 --rc geninfo_all_blocks=1 00:32:27.883 --rc geninfo_unexecuted_blocks=1 00:32:27.883 00:32:27.883 ' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:27.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.883 --rc genhtml_branch_coverage=1 00:32:27.883 --rc genhtml_function_coverage=1 00:32:27.883 --rc genhtml_legend=1 00:32:27.883 --rc geninfo_all_blocks=1 00:32:27.883 --rc geninfo_unexecuted_blocks=1 00:32:27.883 00:32:27.883 ' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:27.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.883 --rc genhtml_branch_coverage=1 00:32:27.883 --rc genhtml_function_coverage=1 00:32:27.883 --rc genhtml_legend=1 00:32:27.883 --rc geninfo_all_blocks=1 00:32:27.883 --rc geninfo_unexecuted_blocks=1 00:32:27.883 00:32:27.883 ' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:27.883 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:32:27.883 --rc genhtml_branch_coverage=1 00:32:27.883 --rc genhtml_function_coverage=1 00:32:27.883 --rc genhtml_legend=1 00:32:27.883 --rc geninfo_all_blocks=1 00:32:27.883 --rc geninfo_unexecuted_blocks=1 00:32:27.883 00:32:27.883 ' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:27.883 13:43:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.883 13:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.416 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:30.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:30.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:30.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.417 13:43:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:30.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.417 13:43:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:32:30.417 00:32:30.417 --- 10.0.0.2 ping statistics --- 00:32:30.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.417 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:30.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:32:30.417 00:32:30.417 --- 10.0.0.1 ping statistics --- 00:32:30.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.417 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=366189 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 366189 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 366189 ']' 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.417 13:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:30.417 [2024-10-14 13:43:22.017766] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:32:30.417 [2024-10-14 13:43:22.017851] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.417 [2024-10-14 13:43:22.082759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:30.417 [2024-10-14 13:43:22.127003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.417 [2024-10-14 13:43:22.127062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:30.418 [2024-10-14 13:43:22.127090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.418 [2024-10-14 13:43:22.127100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.418 [2024-10-14 13:43:22.127109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.418 [2024-10-14 13:43:22.128589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.418 [2024-10-14 13:43:22.128594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=366189 00:32:30.418 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:30.983 [2024-10-14 13:43:22.562858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.983 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:31.242 Malloc0 00:32:31.242 13:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:31.500 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:31.758 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.016 [2024-10-14 13:43:23.694474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.016 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:32.275 [2024-10-14 13:43:23.971206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=366472 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 366472 /var/tmp/bdevperf.sock 00:32:32.275 13:43:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 366472 ']' 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:32.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:32.275 13:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:32.534 13:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:32.534 13:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:32.534 13:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:32.792 13:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:33.357 Nvme0n1 00:32:33.357 13:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:33.923 Nvme0n1 00:32:33.923 13:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:33.923 13:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:35.822 13:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:35.822 13:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:36.080 13:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:36.337 13:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.712 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:37.970 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:37.970 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:37.970 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.970 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:38.228 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.228 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:38.228 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.228 13:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:38.486 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.486 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:38.487 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.487 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:38.745 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.745 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:38.745 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.745 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:39.003 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.003 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:39.003 13:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:39.261 13:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:39.519 13:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.893 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:41.151 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.151 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:41.151 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.151 13:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:41.409 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.409 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:41.409 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.409 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:41.667 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.667 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:41.667 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.667 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:41.925 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.925 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:41.925 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.925 13:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:42.183 13:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.183 13:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:42.183 13:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:42.750 13:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:42.750 13:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.123 13:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:44.381 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.381 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:44.381 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.381 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:44.639 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.639 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:44.639 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.639 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:44.898 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.898 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:44.898 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.898 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:45.156 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.156 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:45.156 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.156 13:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:45.413 13:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.413 13:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:45.413 13:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:45.671 13:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:45.929 13:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:47.303 13:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:47.303 13:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:47.303 13:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.303 13:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:47.303 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.303 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:47.303 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.303 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:47.561 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.561 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:47.561 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.561 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:47.819 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.819 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:47.819 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.819 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:48.078 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.078 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:48.078 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.078 13:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:48.336 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.336 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:48.336 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.336 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:48.902 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:48.902 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:48.902 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:48.902 13:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:49.160 13:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:50.534 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:50.534 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:50.534 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.534 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:50.534 13:43:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:50.534 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:50.534 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.534 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:50.792 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:50.792 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:50.792 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.792 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:51.058 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.058 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:51.058 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.058 13:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:51.323 
13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.323 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:51.323 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.323 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:51.581 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.581 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:51.581 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.581 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:51.839 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.839 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:51.839 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:52.098 13:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:52.356 13:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.731 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:53.989 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.989 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:53.989 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.989 13:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:54.247 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.247 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:54.247 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.247 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:54.505 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.505 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:54.505 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.505 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:54.763 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:54.763 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:54.763 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.763 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:55.021 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.021 13:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:55.280 13:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:55.280 13:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:55.537 13:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:56.103 13:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:57.037 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:57.037 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:57.037 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:57.037 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.295 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.295 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:57.295 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.295 13:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.553 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.553 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.553 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.553 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.811 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.811 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.811 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:57.811 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:58.069 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.069 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:58.069 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.069 13:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:58.327 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.327 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:58.327 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.327 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:58.585 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.585 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:58.585 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:58.842 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:59.100 13:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:00.034 13:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:00.035 13:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:00.035 13:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.035 13:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:00.600 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.600 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:00.600 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.600 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:00.600 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.600 13:43:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:00.600 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.600 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:00.858 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.858 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:01.117 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.117 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:01.375 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.375 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:01.375 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.375 13:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:01.633 13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.633 
13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:01.633 13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.633 13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:01.891 13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.891 13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:01.891 13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:02.148 13:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:02.406 13:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:03.339 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:03.339 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:03.339 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.339 13:43:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:03.597 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.597 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:03.597 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.597 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:03.855 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.855 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:03.855 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.855 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:04.114 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.114 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:04.114 13:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.114 13:43:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:04.372 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.372 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:04.372 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.372 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:04.630 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.630 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:04.630 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.630 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.194 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.194 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:05.194 13:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:05.194 13:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:05.452 13:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.825 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.083 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.083 13:43:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:07.083 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.083 13:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:07.341 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.341 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:07.341 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.341 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:07.599 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.599 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:07.599 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.599 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:07.857 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.857 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:07.857 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.857 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:08.115 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:08.115 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 366472
00:33:08.115 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 366472 ']'
00:33:08.115 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 366472
00:33:08.115 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:33:08.115 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:08.115 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 366472
00:33:08.377 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:33:08.377 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:33:08.377 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 366472'
00:33:08.377 killing process with pid 366472
13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 366472
00:33:08.377 13:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 366472
00:33:08.377 {
00:33:08.377   "results": [
00:33:08.377     {
00:33:08.377       "job": "Nvme0n1",
00:33:08.377       "core_mask": "0x4",
00:33:08.377       "workload": "verify",
00:33:08.377       "status": "terminated",
00:33:08.377       "verify_range": {
00:33:08.377         "start": 0,
00:33:08.377         "length": 16384
00:33:08.377       },
00:33:08.377       "queue_depth": 128,
00:33:08.377       "io_size": 4096,
00:33:08.377       "runtime": 34.225537,
00:33:08.377       "iops": 7988.800876959213,
00:33:08.377       "mibps": 31.206253425621927,
00:33:08.377       "io_failed": 0,
00:33:08.377       "io_timeout": 0,
00:33:08.377       "avg_latency_us": 15995.59759661908,
00:33:08.377       "min_latency_us": 248.79407407407408,
00:33:08.377       "max_latency_us": 4026531.84
00:33:08.377     }
00:33:08.377   ],
00:33:08.377   "core_count": 1
00:33:08.377 }
00:33:08.377 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 366472
00:33:08.377 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:08.377 [2024-10-14 13:43:24.037277] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization...
00:33:08.377 [2024-10-14 13:43:24.037360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366472 ]
00:33:08.377 [2024-10-14 13:43:24.095666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:08.377 [2024-10-14 13:43:24.142181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
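For reference, the `port_status` checks traced above reduce to one pattern: query `bdev_nvme_get_io_paths` over the bdevperf RPC socket, pick out the path for a given trsvcid, and compare one field of the reply. A minimal standalone sketch of that check follows; a canned reply stands in for the live `rpc.py -s /var/tmp/bdevperf.sock` call and grep/cut stand in for jq, so it runs without SPDK. The JSON shape is an assumption inferred from the jq filter visible in this log, not taken from the RPC specification.

```shell
#!/bin/sh
# Canned reply shaped like the fields the log's jq filter selects
# (.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").connected).
# This layout is an assumption based on the filter above, not the RPC spec.
reply='{"poll_groups":[{"io_paths":[
{"transport":{"trsvcid":"4420"},"connected":true,"accessible":true},
{"transport":{"trsvcid":"4421"},"connected":true,"accessible":false}]}]}'

# port_status PORT FIELD EXPECTED -> exit 0 iff the path's field matches.
port_status() {
    port=$1; field=$2; expected=$3
    # Crude stand-in for the jq filter: flatten the JSON, grab the record for
    # this trsvcid, then pull out the requested boolean field.
    got=$(printf '%s' "$reply" | tr -d ' \n' |
          grep -o "\"trsvcid\":\"$port\"},\"connected\":[a-z]*,\"accessible\":[a-z]*" |
          grep -o "\"$field\":[a-z]*" | cut -d: -f2)
    [ "$got" = "$expected" ]
}

port_status 4420 connected true && echo "4420 connected: ok"
port_status 4421 accessible false && echo "4421 inaccessible: ok"
```

This mirrors the sequence in the trace: after the failover step, port 4420 is expected to be connected and accessible while 4421 is still connected but no longer accessible, which is exactly what the `[[ true == \t\r\u\e ]]` / `[[ false == \f\a\l\s\e ]]` comparisons assert.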
00:33:08.377 8485.00 IOPS, 33.14 MiB/s
[2024-10-14T11:44:00.230Z] 8502.00 IOPS, 33.21 MiB/s
[2024-10-14T11:44:00.230Z] 8512.33 IOPS, 33.25 MiB/s
[2024-10-14T11:44:00.230Z] 8483.25 IOPS, 33.14 MiB/s
[2024-10-14T11:44:00.230Z] 8473.20 IOPS, 33.10 MiB/s
[2024-10-14T11:44:00.230Z] 8456.33 IOPS, 33.03 MiB/s
[2024-10-14T11:44:00.230Z] 8458.71 IOPS, 33.04 MiB/s
[2024-10-14T11:44:00.230Z] 8464.62 IOPS, 33.06 MiB/s
[2024-10-14T11:44:00.230Z] 8467.00 IOPS, 33.07 MiB/s
[2024-10-14T11:44:00.230Z] 8470.30 IOPS, 33.09 MiB/s
[2024-10-14T11:44:00.230Z] 8474.09 IOPS, 33.10 MiB/s
[2024-10-14T11:44:00.230Z] 8476.50 IOPS, 33.11 MiB/s
[2024-10-14T11:44:00.230Z] 8477.62 IOPS, 33.12 MiB/s
[2024-10-14T11:44:00.230Z] 8471.79 IOPS, 33.09 MiB/s
[2024-10-14T11:44:00.230Z] [2024-10-14 13:43:40.709048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:08.377 [2024-10-14 13:43:40.709122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:08.377 [2024-10-14 13:43:40.709196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:08.377 [2024-10-14 13:43:40.709219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:08.377 [2024-10-14 13:43:40.709244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:08.377 [2024-10-14 13:43:40.709262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:08.377 [2024-10-14 13:43:40.709301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:33:08.377 [2024-10-14 13:43:40.709642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 [2024-10-14 13:43:40.709823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:08.377 [2024-10-14 13:43:40.709845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.377 
[2024-10-14 13:43:40.709861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.709898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.709914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.710889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.710911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.710938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.710955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.710978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.710995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.711050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 
13:43:40.711075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.711091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.711159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.711202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 
13:43:40.711319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 
13:43:40.711566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 
13:43:40.711856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.378 [2024-10-14 13:43:40.711934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.711973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.711996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 
13:43:40.712088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 
13:43:40.712346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 
13:43:40.712582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.378 [2024-10-14 13:43:40.712636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:08.378 [2024-10-14 13:43:40.712659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 
13:43:40.712794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.712973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.712989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 
13:43:40.713013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 
13:43:40.713257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.379 [2024-10-14 13:43:40.713527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.379 [2024-10-14 13:43:40.713577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.379 [2024-10-14 13:43:40.713621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.379 [2024-10-14 13:43:40.713665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 
13:43:40.713691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.379 [2024-10-14 13:43:40.713708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.379 [2024-10-14 13:43:40.713750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.379 [2024-10-14 13:43:40.713794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 
13:43:40.713937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.713963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.713979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 
13:43:40.714213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 
13:43:40.714461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.379 [2024-10-14 13:43:40.714638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.379 [2024-10-14 13:43:40.714664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.714681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 
13:43:40.714707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.714723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.714749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.714765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.714791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.714807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.714833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.714849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.714876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.714892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.714918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 
13:43:40.714934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.714960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.714976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 
13:43:40.715203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 
13:43:40.715450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:40.715658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 
13:43:40.715689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:40.715705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 
13:43:40.715914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.715982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.715998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:40.716024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.380 [2024-10-14 13:43:40.716040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.380 8472.87 IOPS, 33.10 MiB/s [2024-10-14T11:44:00.233Z] 7943.31 IOPS, 31.03 MiB/s [2024-10-14T11:44:00.233Z] 7476.06 IOPS, 29.20 MiB/s [2024-10-14T11:44:00.233Z] 7060.72 IOPS, 27.58 MiB/s [2024-10-14T11:44:00.233Z] 6690.95 IOPS, 26.14 MiB/s [2024-10-14T11:44:00.233Z] 6773.80 IOPS, 26.46 MiB/s [2024-10-14T11:44:00.233Z] 6858.33 IOPS, 26.79 MiB/s [2024-10-14T11:44:00.233Z] 6973.09 IOPS, 27.24 MiB/s [2024-10-14T11:44:00.233Z] 7162.96 IOPS, 27.98 MiB/s [2024-10-14T11:44:00.233Z] 7334.00 IOPS, 28.65 MiB/s [2024-10-14T11:44:00.233Z] 7484.80 IOPS, 29.24 MiB/s [2024-10-14T11:44:00.233Z] 7526.42 IOPS, 29.40 MiB/s [2024-10-14T11:44:00.233Z] 7557.63 IOPS, 29.52 MiB/s [2024-10-14T11:44:00.233Z] 7590.93 IOPS, 29.65 MiB/s [2024-10-14T11:44:00.233Z] 
7679.38 IOPS, 30.00 MiB/s [2024-10-14T11:44:00.233Z] 7802.67 IOPS, 30.48 MiB/s [2024-10-14T11:44:00.233Z] 7900.48 IOPS, 30.86 MiB/s [2024-10-14T11:44:00.233Z] [2024-10-14 13:43:57.259968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:57.260035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:57.260104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:57.260145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:57.260174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:57.260194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:57.260218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:57.260236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:57.260259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:57.260276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:57.260299] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.380 [2024-10-14 13:43:57.260316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:08.380 [2024-10-14 13:43:57.260341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.381 [2024-10-14 13:43:57.260841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.381 [2024-10-14 13:43:57.260881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.381 [2024-10-14 13:43:57.260922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.260963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.260986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.261003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.261026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.261044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.261067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.261084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.261108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.261143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.261170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.261188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.381 [2024-10-14 13:43:57.263610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.381 [2024-10-14 13:43:57.263666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.381 [2024-10-14 13:43:57.263706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:08.381 [2024-10-14 13:43:57.263729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.381 [2024-10-14 13:43:57.263746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.263769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.263787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.263809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.263826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.263848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.263865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.263888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.263906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.263928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.263960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.263983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.263999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.264039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.264077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.264153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.264199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.264239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.264280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.264322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.264362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.264402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.264450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.382 [2024-10-14 13:43:57.264490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:08.382 [2024-10-14 13:43:57.264529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:08.382 [2024-10-14 13:43:57.264553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:08.382 [2024-10-14 13:43:57.264569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:08.382 7953.12 IOPS, 31.07 MiB/s [2024-10-14T11:44:00.235Z] 7969.70 IOPS, 31.13 MiB/s [2024-10-14T11:44:00.235Z] 7988.68 IOPS, 31.21 MiB/s [2024-10-14T11:44:00.235Z] Received shutdown signal, test time was about 34.226314 seconds
00:33:08.382
00:33:08.382 Latency(us)
00:33:08.382 [2024-10-14T11:44:00.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:08.382 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:08.382 Verification LBA range: start 0x0 length 0x4000
00:33:08.382 Nvme0n1 : 34.23 7988.80 31.21 0.00 0.00 15995.60 248.79 4026531.84
00:33:08.382 [2024-10-14T11:44:00.235Z] ===================================================================================================================
00:33:08.382 [2024-10-14T11:44:00.235Z] Total : 7988.80 31.21 0.00 0.00 15995.60 248.79 4026531.84
00:33:08.382 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status
-- nvmf/common.sh@121 -- # sync 00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.640 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:08.640 rmmod nvme_tcp 00:33:08.640 rmmod nvme_fabrics 00:33:08.898 rmmod nvme_keyring 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 366189 ']' 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 366189 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 366189 ']' 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 366189 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 366189 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 366189' 00:33:08.898 killing process with pid 366189 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 366189 00:33:08.898 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 366189 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.164 13:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
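The nvmftestfini trace above probes the target pid with `kill -0` and checks the process name with `ps` before sending the kill. A minimal sketch of that kill-and-reap pattern (a hypothetical generic helper, not SPDK's actual `killprocess` from autotest_common.sh, which additionally inspects the process name via `ps --no-headers -o comm=`):

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the pattern traced above:
# probe the pid, signal it, then reap it so a clean shutdown can be asserted.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only tests whether the pid exists
    kill -0 "$pid" 2>/dev/null || return 1
    echo "killing process with pid $pid"
    kill "$pid" 2>/dev/null
    # reap the child so it does not linger as a zombie
    wait "$pid" 2>/dev/null
    return 0
}

sleep 30 &
bg=$!
killprocess "$bg"
kill -0 "$bg" 2>/dev/null || echo "pid $bg is gone"
```

After `wait` returns, a follow-up `kill -0` fails, which is how the surrounding test can verify the target really exited.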
00:33:11.068 00:33:11.068 real 0m43.360s 00:33:11.068 user 2m11.440s 00:33:11.068 sys 0m11.164s 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:11.068 ************************************ 00:33:11.068 END TEST nvmf_host_multipath_status 00:33:11.068 ************************************ 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.068 ************************************ 00:33:11.068 START TEST nvmf_discovery_remove_ifc 00:33:11.068 ************************************ 00:33:11.068 13:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:11.327 * Looking for test storage... 
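The discovery_remove_ifc header that follows gates its lcov flags on `lt 1.15 2` via `cmp_versions`, splitting each version string on `.-:` and comparing fields numerically. That dotted-version comparison can be sketched as (a hypothetical `ver_lt` helper, not SPDK's scripts/common.sh implementation):

```shell
#!/usr/bin/env bash
# Hypothetical ver_lt: succeeds when $1 < $2, comparing dot/dash/colon-
# separated numeric fields left to right, padding missing fields with 0.
ver_lt() {
    local -a v1 v2
    local i len
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # first differing field decides the comparison
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    done
    return 1  # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.1 2.1 || echo "2.1 == 2.1"
```

Field-wise comparison is why `1.15 < 2` holds even though a plain string compare would order "1.15" after "2" incorrectly in some locales and a numeric compare of "1.15" vs "2" would be meaningless.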
00:33:11.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:11.327 13:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:11.327 13:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:33:11.327 13:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:33:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.327 --rc genhtml_branch_coverage=1 00:33:11.327 --rc genhtml_function_coverage=1 00:33:11.327 --rc genhtml_legend=1 00:33:11.327 --rc geninfo_all_blocks=1 00:33:11.327 --rc geninfo_unexecuted_blocks=1 00:33:11.327 00:33:11.327 ' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.327 --rc genhtml_branch_coverage=1 00:33:11.327 --rc genhtml_function_coverage=1 00:33:11.327 --rc genhtml_legend=1 00:33:11.327 --rc geninfo_all_blocks=1 00:33:11.327 --rc geninfo_unexecuted_blocks=1 00:33:11.327 00:33:11.327 ' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.327 --rc genhtml_branch_coverage=1 00:33:11.327 --rc genhtml_function_coverage=1 00:33:11.327 --rc genhtml_legend=1 00:33:11.327 --rc geninfo_all_blocks=1 00:33:11.327 --rc geninfo_unexecuted_blocks=1 00:33:11.327 00:33:11.327 ' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.327 --rc genhtml_branch_coverage=1 00:33:11.327 --rc genhtml_function_coverage=1 00:33:11.327 --rc genhtml_legend=1 00:33:11.327 --rc geninfo_all_blocks=1 00:33:11.327 --rc geninfo_unexecuted_blocks=1 00:33:11.327 00:33:11.327 ' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.327 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:11.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:11.328 
13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:11.328 13:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.231 13:44:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.231 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.490 13:44:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:13.490 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.490 13:44:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:13.490 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:13.490 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:13.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 
-- # [[ tcp == tcp ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:13.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:33:13.490 00:33:13.490 --- 10.0.0.2 ping statistics --- 00:33:13.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.490 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:13.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:33:13.490 00:33:13.490 --- 10.0.0.1 ping statistics --- 00:33:13.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.490 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=372815 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:13.490 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 372815 00:33:13.491 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 372815 ']' 00:33:13.491 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.491 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.491 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.491 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.491 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.491 [2024-10-14 13:44:05.297675] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:33:13.491 [2024-10-14 13:44:05.297747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.748 [2024-10-14 13:44:05.364019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.748 [2024-10-14 13:44:05.409574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.748 [2024-10-14 13:44:05.409632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:13.748 [2024-10-14 13:44:05.409655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.748 [2024-10-14 13:44:05.409665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.748 [2024-10-14 13:44:05.409674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.748 [2024-10-14 13:44:05.410298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.748 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:13.748 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:13.749 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:13.749 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:13.749 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.749 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.749 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:13.749 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.749 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.749 [2024-10-14 13:44:05.551834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.749 [2024-10-14 13:44:05.560010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:13.749 null0 00:33:13.749 [2024-10-14 13:44:05.591973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=372845 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 372845 /tmp/host.sock 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 372845 ']' 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:14.007 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:14.007 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.007 [2024-10-14 13:44:05.657418] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:33:14.007 [2024-10-14 13:44:05.657498] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372845 ] 00:33:14.007 [2024-10-14 13:44:05.714479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.007 [2024-10-14 13:44:05.764767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.266 13:44:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.266 13:44:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.198 [2024-10-14 13:44:07.049830] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:15.198 [2024-10-14 13:44:07.049860] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:15.198 [2024-10-14 13:44:07.049883] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:15.456 [2024-10-14 13:44:07.180356] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:15.717 [2024-10-14 13:44:07.362261] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:15.717 [2024-10-14 13:44:07.362332] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:15.717 [2024-10-14 13:44:07.362369] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:15.717 [2024-10-14 13:44:07.362393] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:15.717 [2024-10-14 13:44:07.362446] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:15.717 [2024-10-14 13:44:07.368326] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15d13d0 was disconnected and freed. delete nvme_qpair. 
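The `wait_for_bdev` / `get_bdev_list` pair driving the loop above can be sketched as follows. This is a hedged reconstruction from the commands visible in this log (`rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs`); `get_bdev_list` is stubbed here so the sketch is self-contained, whereas the real helper queries the running host app over its RPC socket.

```shell
# Stub standing in for the real helper, which in this run executes:
#   rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
# The stub returns the bdev name the test expects after discovery attach.
get_bdev_list() { echo "nvme0n1"; }

# Poll once per second until the bdev list matches the expected value
# (an empty argument waits for the list to drain, as at sh@79 above).
wait_for_bdev() {
  expected="$1"
  while [ "$(get_bdev_list)" != "$expected" ]; do
    sleep 1
  done
}

wait_for_bdev nvme0n1
echo "bdev list settled: $(get_bdev_list)"
```

The repeated `bdev_get_bdevs`/`jq`/`sort`/`xargs` stanzas later in the log are iterations of exactly this one-second polling loop.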
00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:15.717 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:15.975 13:44:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.908 13:44:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:33:17.842 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.100 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:18.100 13:44:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:19.033 13:44:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:19.966 13:44:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:19.966 13:44:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:21.339 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:21.339 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.339 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:21.339 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.339 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.339 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:21.339 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:21.339 [2024-10-14 13:44:12.803768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:21.339 [2024-10-14 13:44:12.803841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.339 [2024-10-14 13:44:12.803872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.339 [2024-10-14 13:44:12.803892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.339 [2024-10-14 13:44:12.803905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.339 [2024-10-14 13:44:12.803918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.339 [2024-10-14 13:44:12.803930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.339 [2024-10-14 13:44:12.803943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.340 [2024-10-14 13:44:12.803955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.340 [2024-10-14 13:44:12.803968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.340 [2024-10-14 13:44:12.803980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.340 [2024-10-14 13:44:12.803993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15adc80 is same with the state(6) to be set 00:33:21.340 [2024-10-14 13:44:12.813789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15adc80 (9): Bad file descriptor 00:33:21.340 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.340 [2024-10-14 13:44:12.823836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.340 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:21.340 13:44:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:22.274 [2024-10-14 13:44:13.829171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:22.274 [2024-10-14 13:44:13.829225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15adc80 with addr=10.0.0.2, port=4420 00:33:22.274 [2024-10-14 13:44:13.829250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15adc80 is same with the state(6) to be set 00:33:22.274 [2024-10-14 13:44:13.829291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15adc80 (9): Bad file descriptor 00:33:22.274 [2024-10-14 13:44:13.829363] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:22.274 [2024-10-14 13:44:13.829403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:22.274 [2024-10-14 13:44:13.829434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:22.274 [2024-10-14 13:44:13.829451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:22.274 [2024-10-14 13:44:13.829479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
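The reconnect churn above follows from the flags passed to `bdev_nvme_start_discovery` earlier in this run (`--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1`). A rough sketch of the retry budget those two values imply — an approximation for orientation, not the bdev_nvme module's exact state machine:

```shell
# Values taken from the bdev_nvme_start_discovery invocation in this log.
ctrlr_loss_timeout_sec=2
reconnect_delay_sec=1

# With one reconnect attempt per delay interval, the host gets roughly
# timeout/delay attempts before declaring controller loss and deleting it.
attempts=$(( ctrlr_loss_timeout_sec / reconnect_delay_sec ))
echo "host retries roughly ${attempts} times before giving up on the controller"
```

That budget is why the log shows a couple of "resetting controller" / "controller reinitialization failed" cycles before the controller lands in failed state and the bdev is removed.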
00:33:22.274 [2024-10-14 13:44:13.829494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:22.274 13:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:23.250 [2024-10-14 13:44:14.831988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:23.250 [2024-10-14 13:44:14.832017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:23.250 [2024-10-14 13:44:14.832030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:23.250 [2024-10-14 13:44:14.832042] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:23.250 [2024-10-14 13:44:14.832062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.250 [2024-10-14 13:44:14.832101] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:23.250 [2024-10-14 13:44:14.832161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.250 [2024-10-14 13:44:14.832206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.250 [2024-10-14 13:44:14.832227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.250 [2024-10-14 13:44:14.832239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.250 [2024-10-14 13:44:14.832252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.250 [2024-10-14 13:44:14.832267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.250 [2024-10-14 13:44:14.832279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.250 
[2024-10-14 13:44:14.832291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.250 [2024-10-14 13:44:14.832305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.250 [2024-10-14 13:44:14.832317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.250 [2024-10-14 13:44:14.832331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:33:23.250 [2024-10-14 13:44:14.832396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159d390 (9): Bad file descriptor 00:33:23.250 [2024-10-14 13:44:14.833391] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:23.250 [2024-10-14 13:44:14.833428] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.250 13:44:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.250 13:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.250 13:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:23.250 13:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 
-- # get_bdev_list 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:24.292 13:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:25.286 [2024-10-14 13:44:16.844951] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:25.286 [2024-10-14 13:44:16.844986] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:25.286 [2024-10-14 13:44:16.845010] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:25.286 [2024-10-14 13:44:16.972450] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.286 
13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:25.286 13:44:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:25.574 [2024-10-14 13:44:17.157725] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:25.574 [2024-10-14 13:44:17.157774] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:25.574 [2024-10-14 13:44:17.157804] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:25.574 [2024-10-14 13:44:17.157827] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:25.574 [2024-10-14 13:44:17.157841] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:25.574 [2024-10-14 13:44:17.163872] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15a95a0 was disconnected and freed. delete nvme_qpair. 
00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 372845 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 372845 ']' 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 372845 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 372845 00:33:26.536 
13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 372845' 00:33:26.536 killing process with pid 372845 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 372845 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 372845 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.536 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.536 rmmod nvme_tcp 00:33:26.536 rmmod nvme_fabrics 00:33:26.536 rmmod nvme_keyring 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 372815 ']' 00:33:26.795 13:44:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 372815 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 372815 ']' 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 372815 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 372815 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 372815' 00:33:26.795 killing process with pid 372815 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 372815 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 372815 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.795 13:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:29.335 00:33:29.335 real 0m17.784s 00:33:29.335 user 0m25.885s 00:33:29.335 sys 0m3.003s 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.335 ************************************ 00:33:29.335 END TEST nvmf_discovery_remove_ifc 00:33:29.335 ************************************ 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.335 ************************************ 00:33:29.335 START TEST nvmf_identify_kernel_target 
00:33:29.335 ************************************ 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:29.335 * Looking for test storage... 00:33:29.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:29.335 
13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.335 --rc genhtml_branch_coverage=1 00:33:29.335 --rc genhtml_function_coverage=1 00:33:29.335 --rc genhtml_legend=1 00:33:29.335 --rc geninfo_all_blocks=1 00:33:29.335 --rc geninfo_unexecuted_blocks=1 00:33:29.335 00:33:29.335 ' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.335 --rc genhtml_branch_coverage=1 00:33:29.335 --rc genhtml_function_coverage=1 00:33:29.335 --rc genhtml_legend=1 00:33:29.335 --rc geninfo_all_blocks=1 00:33:29.335 --rc geninfo_unexecuted_blocks=1 00:33:29.335 00:33:29.335 ' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.335 --rc genhtml_branch_coverage=1 00:33:29.335 --rc genhtml_function_coverage=1 00:33:29.335 --rc genhtml_legend=1 00:33:29.335 --rc geninfo_all_blocks=1 00:33:29.335 --rc geninfo_unexecuted_blocks=1 00:33:29.335 00:33:29.335 ' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:29.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.335 --rc genhtml_branch_coverage=1 00:33:29.335 --rc genhtml_function_coverage=1 00:33:29.335 --rc genhtml_legend=1 00:33:29.335 --rc geninfo_all_blocks=1 00:33:29.335 --rc geninfo_unexecuted_blocks=1 00:33:29.335 
00:33:29.335 ' 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.335 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.336 13:44:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:29.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:29.336 13:44:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.250 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.251 13:44:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:31.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.251 13:44:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:31.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.251 13:44:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:31.251 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:31.251 Found net devices under 0000:0a:00.1: cvl_0_1 
00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.251 13:44:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:31.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:33:31.251 00:33:31.251 --- 10.0.0.2 ping statistics --- 00:33:31.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.251 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:31.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:33:31.251 00:33:31.251 --- 10.0.0.1 ping statistics --- 00:33:31.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.251 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:31.251 
13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.251 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:31.252 13:44:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:32.634 Waiting for block devices as requested 00:33:32.634 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:32.634 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:32.893 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:32.893 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:32.893 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:32.893 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:33.152 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:33.152 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:33.152 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:33.152 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:33.412 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:33.412 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:33.412 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:33.412 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:33.673 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:33:33.673 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:33.673 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:33.934 No valid GPT data, bailing 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:33.934 00:33:33.934 Discovery Log Number of Records 2, Generation counter 2 00:33:33.934 =====Discovery Log Entry 0====== 00:33:33.934 trtype: tcp 00:33:33.934 adrfam: ipv4 00:33:33.934 subtype: current discovery subsystem 
00:33:33.934 treq: not specified, sq flow control disable supported 00:33:33.934 portid: 1 00:33:33.934 trsvcid: 4420 00:33:33.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:33.934 traddr: 10.0.0.1 00:33:33.934 eflags: none 00:33:33.934 sectype: none 00:33:33.934 =====Discovery Log Entry 1====== 00:33:33.934 trtype: tcp 00:33:33.934 adrfam: ipv4 00:33:33.934 subtype: nvme subsystem 00:33:33.934 treq: not specified, sq flow control disable supported 00:33:33.934 portid: 1 00:33:33.934 trsvcid: 4420 00:33:33.934 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:33.934 traddr: 10.0.0.1 00:33:33.934 eflags: none 00:33:33.934 sectype: none 00:33:33.934 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:33.934 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:34.196 ===================================================== 00:33:34.196 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:34.196 ===================================================== 00:33:34.196 Controller Capabilities/Features 00:33:34.196 ================================ 00:33:34.196 Vendor ID: 0000 00:33:34.196 Subsystem Vendor ID: 0000 00:33:34.196 Serial Number: 4563ec53abcc5a0fd7eb 00:33:34.196 Model Number: Linux 00:33:34.196 Firmware Version: 6.8.9-20 00:33:34.196 Recommended Arb Burst: 0 00:33:34.196 IEEE OUI Identifier: 00 00 00 00:33:34.196 Multi-path I/O 00:33:34.196 May have multiple subsystem ports: No 00:33:34.196 May have multiple controllers: No 00:33:34.196 Associated with SR-IOV VF: No 00:33:34.196 Max Data Transfer Size: Unlimited 00:33:34.196 Max Number of Namespaces: 0 00:33:34.196 Max Number of I/O Queues: 1024 00:33:34.196 NVMe Specification Version (VS): 1.3 00:33:34.196 NVMe Specification Version (Identify): 1.3 00:33:34.196 Maximum Queue Entries: 1024 
00:33:34.196 Contiguous Queues Required: No 00:33:34.196 Arbitration Mechanisms Supported 00:33:34.196 Weighted Round Robin: Not Supported 00:33:34.196 Vendor Specific: Not Supported 00:33:34.196 Reset Timeout: 7500 ms 00:33:34.196 Doorbell Stride: 4 bytes 00:33:34.196 NVM Subsystem Reset: Not Supported 00:33:34.196 Command Sets Supported 00:33:34.196 NVM Command Set: Supported 00:33:34.196 Boot Partition: Not Supported 00:33:34.196 Memory Page Size Minimum: 4096 bytes 00:33:34.196 Memory Page Size Maximum: 4096 bytes 00:33:34.196 Persistent Memory Region: Not Supported 00:33:34.196 Optional Asynchronous Events Supported 00:33:34.196 Namespace Attribute Notices: Not Supported 00:33:34.196 Firmware Activation Notices: Not Supported 00:33:34.196 ANA Change Notices: Not Supported 00:33:34.196 PLE Aggregate Log Change Notices: Not Supported 00:33:34.196 LBA Status Info Alert Notices: Not Supported 00:33:34.196 EGE Aggregate Log Change Notices: Not Supported 00:33:34.196 Normal NVM Subsystem Shutdown event: Not Supported 00:33:34.196 Zone Descriptor Change Notices: Not Supported 00:33:34.196 Discovery Log Change Notices: Supported 00:33:34.196 Controller Attributes 00:33:34.196 128-bit Host Identifier: Not Supported 00:33:34.196 Non-Operational Permissive Mode: Not Supported 00:33:34.196 NVM Sets: Not Supported 00:33:34.196 Read Recovery Levels: Not Supported 00:33:34.196 Endurance Groups: Not Supported 00:33:34.196 Predictable Latency Mode: Not Supported 00:33:34.196 Traffic Based Keep ALive: Not Supported 00:33:34.196 Namespace Granularity: Not Supported 00:33:34.196 SQ Associations: Not Supported 00:33:34.196 UUID List: Not Supported 00:33:34.196 Multi-Domain Subsystem: Not Supported 00:33:34.196 Fixed Capacity Management: Not Supported 00:33:34.196 Variable Capacity Management: Not Supported 00:33:34.196 Delete Endurance Group: Not Supported 00:33:34.196 Delete NVM Set: Not Supported 00:33:34.196 Extended LBA Formats Supported: Not Supported 00:33:34.196 Flexible 
Data Placement Supported: Not Supported 00:33:34.196 00:33:34.196 Controller Memory Buffer Support 00:33:34.196 ================================ 00:33:34.196 Supported: No 00:33:34.196 00:33:34.196 Persistent Memory Region Support 00:33:34.196 ================================ 00:33:34.196 Supported: No 00:33:34.196 00:33:34.196 Admin Command Set Attributes 00:33:34.196 ============================ 00:33:34.196 Security Send/Receive: Not Supported 00:33:34.196 Format NVM: Not Supported 00:33:34.196 Firmware Activate/Download: Not Supported 00:33:34.196 Namespace Management: Not Supported 00:33:34.196 Device Self-Test: Not Supported 00:33:34.196 Directives: Not Supported 00:33:34.196 NVMe-MI: Not Supported 00:33:34.196 Virtualization Management: Not Supported 00:33:34.196 Doorbell Buffer Config: Not Supported 00:33:34.196 Get LBA Status Capability: Not Supported 00:33:34.196 Command & Feature Lockdown Capability: Not Supported 00:33:34.196 Abort Command Limit: 1 00:33:34.196 Async Event Request Limit: 1 00:33:34.196 Number of Firmware Slots: N/A 00:33:34.196 Firmware Slot 1 Read-Only: N/A 00:33:34.196 Firmware Activation Without Reset: N/A 00:33:34.196 Multiple Update Detection Support: N/A 00:33:34.196 Firmware Update Granularity: No Information Provided 00:33:34.196 Per-Namespace SMART Log: No 00:33:34.196 Asymmetric Namespace Access Log Page: Not Supported 00:33:34.196 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:34.196 Command Effects Log Page: Not Supported 00:33:34.196 Get Log Page Extended Data: Supported 00:33:34.196 Telemetry Log Pages: Not Supported 00:33:34.196 Persistent Event Log Pages: Not Supported 00:33:34.196 Supported Log Pages Log Page: May Support 00:33:34.196 Commands Supported & Effects Log Page: Not Supported 00:33:34.196 Feature Identifiers & Effects Log Page:May Support 00:33:34.196 NVMe-MI Commands & Effects Log Page: May Support 00:33:34.196 Data Area 4 for Telemetry Log: Not Supported 00:33:34.196 Error Log Page Entries 
Supported: 1 00:33:34.196 Keep Alive: Not Supported 00:33:34.196 00:33:34.196 NVM Command Set Attributes 00:33:34.196 ========================== 00:33:34.196 Submission Queue Entry Size 00:33:34.196 Max: 1 00:33:34.196 Min: 1 00:33:34.196 Completion Queue Entry Size 00:33:34.196 Max: 1 00:33:34.196 Min: 1 00:33:34.196 Number of Namespaces: 0 00:33:34.196 Compare Command: Not Supported 00:33:34.196 Write Uncorrectable Command: Not Supported 00:33:34.196 Dataset Management Command: Not Supported 00:33:34.196 Write Zeroes Command: Not Supported 00:33:34.196 Set Features Save Field: Not Supported 00:33:34.196 Reservations: Not Supported 00:33:34.196 Timestamp: Not Supported 00:33:34.196 Copy: Not Supported 00:33:34.196 Volatile Write Cache: Not Present 00:33:34.196 Atomic Write Unit (Normal): 1 00:33:34.196 Atomic Write Unit (PFail): 1 00:33:34.196 Atomic Compare & Write Unit: 1 00:33:34.196 Fused Compare & Write: Not Supported 00:33:34.196 Scatter-Gather List 00:33:34.196 SGL Command Set: Supported 00:33:34.196 SGL Keyed: Not Supported 00:33:34.196 SGL Bit Bucket Descriptor: Not Supported 00:33:34.196 SGL Metadata Pointer: Not Supported 00:33:34.196 Oversized SGL: Not Supported 00:33:34.197 SGL Metadata Address: Not Supported 00:33:34.197 SGL Offset: Supported 00:33:34.197 Transport SGL Data Block: Not Supported 00:33:34.197 Replay Protected Memory Block: Not Supported 00:33:34.197 00:33:34.197 Firmware Slot Information 00:33:34.197 ========================= 00:33:34.197 Active slot: 0 00:33:34.197 00:33:34.197 00:33:34.197 Error Log 00:33:34.197 ========= 00:33:34.197 00:33:34.197 Active Namespaces 00:33:34.197 ================= 00:33:34.197 Discovery Log Page 00:33:34.197 ================== 00:33:34.197 Generation Counter: 2 00:33:34.197 Number of Records: 2 00:33:34.197 Record Format: 0 00:33:34.197 00:33:34.197 Discovery Log Entry 0 00:33:34.197 ---------------------- 00:33:34.197 Transport Type: 3 (TCP) 00:33:34.197 Address Family: 1 (IPv4) 00:33:34.197 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:33:34.197 Entry Flags: 00:33:34.197 Duplicate Returned Information: 0 00:33:34.197 Explicit Persistent Connection Support for Discovery: 0 00:33:34.197 Transport Requirements: 00:33:34.197 Secure Channel: Not Specified 00:33:34.197 Port ID: 1 (0x0001) 00:33:34.197 Controller ID: 65535 (0xffff) 00:33:34.197 Admin Max SQ Size: 32 00:33:34.197 Transport Service Identifier: 4420 00:33:34.197 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:34.197 Transport Address: 10.0.0.1 00:33:34.197 Discovery Log Entry 1 00:33:34.197 ---------------------- 00:33:34.197 Transport Type: 3 (TCP) 00:33:34.197 Address Family: 1 (IPv4) 00:33:34.197 Subsystem Type: 2 (NVM Subsystem) 00:33:34.197 Entry Flags: 00:33:34.197 Duplicate Returned Information: 0 00:33:34.197 Explicit Persistent Connection Support for Discovery: 0 00:33:34.197 Transport Requirements: 00:33:34.197 Secure Channel: Not Specified 00:33:34.197 Port ID: 1 (0x0001) 00:33:34.197 Controller ID: 65535 (0xffff) 00:33:34.197 Admin Max SQ Size: 32 00:33:34.197 Transport Service Identifier: 4420 00:33:34.197 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:34.197 Transport Address: 10.0.0.1 00:33:34.197 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:34.197 get_feature(0x01) failed 00:33:34.197 get_feature(0x02) failed 00:33:34.197 get_feature(0x04) failed 00:33:34.197 ===================================================== 00:33:34.197 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:34.197 ===================================================== 00:33:34.197 Controller Capabilities/Features 00:33:34.197 ================================ 00:33:34.197 Vendor ID: 0000 00:33:34.197 Subsystem Vendor ID: 
0000 00:33:34.197 Serial Number: 6702c596293bc2e196ad 00:33:34.197 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:34.197 Firmware Version: 6.8.9-20 00:33:34.197 Recommended Arb Burst: 6 00:33:34.197 IEEE OUI Identifier: 00 00 00 00:33:34.197 Multi-path I/O 00:33:34.197 May have multiple subsystem ports: Yes 00:33:34.197 May have multiple controllers: Yes 00:33:34.197 Associated with SR-IOV VF: No 00:33:34.197 Max Data Transfer Size: Unlimited 00:33:34.197 Max Number of Namespaces: 1024 00:33:34.197 Max Number of I/O Queues: 128 00:33:34.197 NVMe Specification Version (VS): 1.3 00:33:34.197 NVMe Specification Version (Identify): 1.3 00:33:34.197 Maximum Queue Entries: 1024 00:33:34.197 Contiguous Queues Required: No 00:33:34.197 Arbitration Mechanisms Supported 00:33:34.197 Weighted Round Robin: Not Supported 00:33:34.197 Vendor Specific: Not Supported 00:33:34.197 Reset Timeout: 7500 ms 00:33:34.197 Doorbell Stride: 4 bytes 00:33:34.197 NVM Subsystem Reset: Not Supported 00:33:34.197 Command Sets Supported 00:33:34.197 NVM Command Set: Supported 00:33:34.197 Boot Partition: Not Supported 00:33:34.197 Memory Page Size Minimum: 4096 bytes 00:33:34.197 Memory Page Size Maximum: 4096 bytes 00:33:34.197 Persistent Memory Region: Not Supported 00:33:34.197 Optional Asynchronous Events Supported 00:33:34.197 Namespace Attribute Notices: Supported 00:33:34.197 Firmware Activation Notices: Not Supported 00:33:34.197 ANA Change Notices: Supported 00:33:34.197 PLE Aggregate Log Change Notices: Not Supported 00:33:34.197 LBA Status Info Alert Notices: Not Supported 00:33:34.197 EGE Aggregate Log Change Notices: Not Supported 00:33:34.197 Normal NVM Subsystem Shutdown event: Not Supported 00:33:34.197 Zone Descriptor Change Notices: Not Supported 00:33:34.197 Discovery Log Change Notices: Not Supported 00:33:34.197 Controller Attributes 00:33:34.197 128-bit Host Identifier: Supported 00:33:34.197 Non-Operational Permissive Mode: Not Supported 00:33:34.197 NVM Sets: Not 
Supported 00:33:34.197 Read Recovery Levels: Not Supported 00:33:34.197 Endurance Groups: Not Supported 00:33:34.197 Predictable Latency Mode: Not Supported 00:33:34.197 Traffic Based Keep ALive: Supported 00:33:34.197 Namespace Granularity: Not Supported 00:33:34.197 SQ Associations: Not Supported 00:33:34.197 UUID List: Not Supported 00:33:34.197 Multi-Domain Subsystem: Not Supported 00:33:34.197 Fixed Capacity Management: Not Supported 00:33:34.197 Variable Capacity Management: Not Supported 00:33:34.197 Delete Endurance Group: Not Supported 00:33:34.197 Delete NVM Set: Not Supported 00:33:34.197 Extended LBA Formats Supported: Not Supported 00:33:34.197 Flexible Data Placement Supported: Not Supported 00:33:34.197 00:33:34.197 Controller Memory Buffer Support 00:33:34.197 ================================ 00:33:34.197 Supported: No 00:33:34.197 00:33:34.197 Persistent Memory Region Support 00:33:34.197 ================================ 00:33:34.197 Supported: No 00:33:34.197 00:33:34.197 Admin Command Set Attributes 00:33:34.197 ============================ 00:33:34.197 Security Send/Receive: Not Supported 00:33:34.197 Format NVM: Not Supported 00:33:34.197 Firmware Activate/Download: Not Supported 00:33:34.197 Namespace Management: Not Supported 00:33:34.197 Device Self-Test: Not Supported 00:33:34.197 Directives: Not Supported 00:33:34.197 NVMe-MI: Not Supported 00:33:34.197 Virtualization Management: Not Supported 00:33:34.197 Doorbell Buffer Config: Not Supported 00:33:34.197 Get LBA Status Capability: Not Supported 00:33:34.197 Command & Feature Lockdown Capability: Not Supported 00:33:34.197 Abort Command Limit: 4 00:33:34.197 Async Event Request Limit: 4 00:33:34.197 Number of Firmware Slots: N/A 00:33:34.197 Firmware Slot 1 Read-Only: N/A 00:33:34.197 Firmware Activation Without Reset: N/A 00:33:34.197 Multiple Update Detection Support: N/A 00:33:34.197 Firmware Update Granularity: No Information Provided 00:33:34.197 Per-Namespace SMART Log: Yes 
00:33:34.197 Asymmetric Namespace Access Log Page: Supported
00:33:34.197 ANA Transition Time : 10 sec
00:33:34.197
00:33:34.197 Asymmetric Namespace Access Capabilities
00:33:34.197 ANA Optimized State : Supported
00:33:34.197 ANA Non-Optimized State : Supported
00:33:34.197 ANA Inaccessible State : Supported
00:33:34.197 ANA Persistent Loss State : Supported
00:33:34.197 ANA Change State : Supported
00:33:34.197 ANAGRPID is not changed : No
00:33:34.197 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:33:34.197
00:33:34.197 ANA Group Identifier Maximum : 128
00:33:34.197 Number of ANA Group Identifiers : 128
00:33:34.197 Max Number of Allowed Namespaces : 1024
00:33:34.197 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:33:34.197 Command Effects Log Page: Supported
00:33:34.197 Get Log Page Extended Data: Supported
00:33:34.197 Telemetry Log Pages: Not Supported
00:33:34.197 Persistent Event Log Pages: Not Supported
00:33:34.197 Supported Log Pages Log Page: May Support
00:33:34.197 Commands Supported & Effects Log Page: Not Supported
00:33:34.197 Feature Identifiers & Effects Log Page:May Support
00:33:34.197 NVMe-MI Commands & Effects Log Page: May Support
00:33:34.197 Data Area 4 for Telemetry Log: Not Supported
00:33:34.197 Error Log Page Entries Supported: 128
00:33:34.197 Keep Alive: Supported
00:33:34.197 Keep Alive Granularity: 1000 ms
00:33:34.197
00:33:34.197 NVM Command Set Attributes
00:33:34.197 ==========================
00:33:34.197 Submission Queue Entry Size
00:33:34.197 Max: 64
00:33:34.197 Min: 64
00:33:34.197 Completion Queue Entry Size
00:33:34.197 Max: 16
00:33:34.197 Min: 16
00:33:34.197 Number of Namespaces: 1024
00:33:34.197 Compare Command: Not Supported
00:33:34.197 Write Uncorrectable Command: Not Supported
00:33:34.197 Dataset Management Command: Supported
00:33:34.197 Write Zeroes Command: Supported
00:33:34.197 Set Features Save Field: Not Supported
00:33:34.197 Reservations: Not Supported
00:33:34.197 Timestamp: Not Supported
00:33:34.197 Copy: Not Supported
00:33:34.197 Volatile Write Cache: Present
00:33:34.197 Atomic Write Unit (Normal): 1
00:33:34.197 Atomic Write Unit (PFail): 1
00:33:34.197 Atomic Compare & Write Unit: 1
00:33:34.197 Fused Compare & Write: Not Supported
00:33:34.197 Scatter-Gather List
00:33:34.197 SGL Command Set: Supported
00:33:34.197 SGL Keyed: Not Supported
00:33:34.197 SGL Bit Bucket Descriptor: Not Supported
00:33:34.197 SGL Metadata Pointer: Not Supported
00:33:34.197 Oversized SGL: Not Supported
00:33:34.197 SGL Metadata Address: Not Supported
00:33:34.197 SGL Offset: Supported
00:33:34.198 Transport SGL Data Block: Not Supported
00:33:34.198 Replay Protected Memory Block: Not Supported
00:33:34.198
00:33:34.198 Firmware Slot Information
00:33:34.198 =========================
00:33:34.198 Active slot: 0
00:33:34.198
00:33:34.198 Asymmetric Namespace Access
00:33:34.198 ===========================
00:33:34.198 Change Count : 0
00:33:34.198 Number of ANA Group Descriptors : 1
00:33:34.198 ANA Group Descriptor : 0
00:33:34.198 ANA Group ID : 1
00:33:34.198 Number of NSID Values : 1
00:33:34.198 Change Count : 0
00:33:34.198 ANA State : 1
00:33:34.198 Namespace Identifier : 1
00:33:34.198
00:33:34.198 Commands Supported and Effects
00:33:34.198 ==============================
00:33:34.198 Admin Commands
00:33:34.198 --------------
00:33:34.198 Get Log Page (02h): Supported
00:33:34.198 Identify (06h): Supported
00:33:34.198 Abort (08h): Supported
00:33:34.198 Set Features (09h): Supported
00:33:34.198 Get Features (0Ah): Supported
00:33:34.198 Asynchronous Event Request (0Ch): Supported
00:33:34.198 Keep Alive (18h): Supported
00:33:34.198 I/O Commands
00:33:34.198 ------------
00:33:34.198 Flush (00h): Supported
00:33:34.198 Write (01h): Supported LBA-Change
00:33:34.198 Read (02h): Supported
00:33:34.198 Write Zeroes (08h): Supported LBA-Change
00:33:34.198 Dataset Management (09h): Supported
00:33:34.198
00:33:34.198 Error Log
00:33:34.198 =========
00:33:34.198 Entry: 0
00:33:34.198 Error Count: 0x3
00:33:34.198 Submission Queue Id: 0x0
00:33:34.198 Command Id: 0x5
00:33:34.198 Phase Bit: 0
00:33:34.198 Status Code: 0x2
00:33:34.198 Status Code Type: 0x0
00:33:34.198 Do Not Retry: 1
00:33:34.198 Error Location: 0x28
00:33:34.198 LBA: 0x0
00:33:34.198 Namespace: 0x0
00:33:34.198 Vendor Log Page: 0x0
00:33:34.198 -----------
00:33:34.198 Entry: 1
00:33:34.198 Error Count: 0x2
00:33:34.198 Submission Queue Id: 0x0
00:33:34.198 Command Id: 0x5
00:33:34.198 Phase Bit: 0
00:33:34.198 Status Code: 0x2
00:33:34.198 Status Code Type: 0x0
00:33:34.198 Do Not Retry: 1
00:33:34.198 Error Location: 0x28
00:33:34.198 LBA: 0x0
00:33:34.198 Namespace: 0x0
00:33:34.198 Vendor Log Page: 0x0
00:33:34.198 -----------
00:33:34.198 Entry: 2
00:33:34.198 Error Count: 0x1
00:33:34.198 Submission Queue Id: 0x0
00:33:34.198 Command Id: 0x4
00:33:34.198 Phase Bit: 0
00:33:34.198 Status Code: 0x2
00:33:34.198 Status Code Type: 0x0
00:33:34.198 Do Not Retry: 1
00:33:34.198 Error Location: 0x28
00:33:34.198 LBA: 0x0
00:33:34.198 Namespace: 0x0
00:33:34.198 Vendor Log Page: 0x0
00:33:34.198
00:33:34.198 Number of Queues
00:33:34.198 ================
00:33:34.198 Number of I/O Submission Queues: 128
00:33:34.198 Number of I/O Completion Queues: 128
00:33:34.198
00:33:34.198 ZNS Specific Controller Data
00:33:34.198 ============================
00:33:34.198 Zone Append Size Limit: 0
00:33:34.198
00:33:34.198
00:33:34.198 Active Namespaces
00:33:34.198 =================
00:33:34.198 get_feature(0x05) failed
00:33:34.198 Namespace ID:1
00:33:34.198 Command Set Identifier: NVM (00h)
00:33:34.198 Deallocate: Supported
00:33:34.198 Deallocated/Unwritten Error: Not Supported
00:33:34.198 Deallocated Read Value: Unknown
00:33:34.198 Deallocate in Write Zeroes: Not Supported
00:33:34.198 Deallocated Guard Field: 0xFFFF
00:33:34.198 Flush: Supported
00:33:34.198 Reservation: Not Supported
00:33:34.198 Namespace Sharing Capabilities: Multiple Controllers
00:33:34.198 Size (in LBAs): 1953525168 (931GiB)
00:33:34.198 Capacity (in LBAs): 1953525168 (931GiB)
00:33:34.198 Utilization (in LBAs): 1953525168 (931GiB)
00:33:34.198 UUID: ab4ce400-99f1-4f3b-b3cf-713b41d4ee32
00:33:34.198 Thin Provisioning: Not Supported
00:33:34.198 Per-NS Atomic Units: Yes
00:33:34.198 Atomic Boundary Size (Normal): 0
00:33:34.198 Atomic Boundary Size (PFail): 0
00:33:34.198 Atomic Boundary Offset: 0
00:33:34.198 NGUID/EUI64 Never Reused: No
00:33:34.198 ANA group ID: 1
00:33:34.198 Namespace Write Protected: No
00:33:34.198 Number of LBA Formats: 1
00:33:34.198 Current LBA Format: LBA Format #00
00:33:34.198 LBA Format #00: Data Size: 512 Metadata Size: 0
00:33:34.198
00:33:34.198 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:33:34.198 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:34.198 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:33:34.198 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:34.198 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:33:34.198 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:34.198 13:44:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:34.198 rmmod nvme_tcp
00:33:34.198 rmmod nvme_fabrics
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']'
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:34.198 13:44:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*)
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet
00:33:36.746 13:44:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:33:37.690 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:33:37.690 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:33:37.690 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:33:37.690 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:33:37.690 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:33:37.690 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:33:37.690 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:33:37.690 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:33:37.690 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:33:38.633 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:33:38.633
00:33:38.633 real 0m9.677s
00:33:38.633 user 0m2.055s
00:33:38.633 sys 0m3.609s
00:33:38.633 13:44:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:38.633 13:44:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:33:38.633 ************************************
00:33:38.633 END TEST nvmf_identify_kernel_target
00:33:38.633 ************************************
00:33:38.633 13:44:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:33:38.633 13:44:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:33:38.633 13:44:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:38.633 13:44:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:38.633 ************************************
00:33:38.633 START TEST nvmf_auth_host
00:33:38.633 ************************************
00:33:38.633 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:33:38.633 * Looking for test storage...
00:33:38.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:38.893 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:33:38.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.893 --rc genhtml_branch_coverage=1
00:33:38.893 --rc genhtml_function_coverage=1
00:33:38.893 --rc genhtml_legend=1
00:33:38.893 --rc geninfo_all_blocks=1
00:33:38.894 --rc geninfo_unexecuted_blocks=1
00:33:38.894
00:33:38.894 '
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:33:38.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.894 --rc genhtml_branch_coverage=1
00:33:38.894 --rc genhtml_function_coverage=1
00:33:38.894 --rc genhtml_legend=1
00:33:38.894 --rc geninfo_all_blocks=1
00:33:38.894 --rc geninfo_unexecuted_blocks=1
00:33:38.894
00:33:38.894 '
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:33:38.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.894 --rc genhtml_branch_coverage=1
00:33:38.894 --rc genhtml_function_coverage=1
00:33:38.894 --rc genhtml_legend=1
00:33:38.894 --rc geninfo_all_blocks=1
00:33:38.894 --rc geninfo_unexecuted_blocks=1
00:33:38.894
00:33:38.894 '
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:33:38.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.894 --rc genhtml_branch_coverage=1
00:33:38.894 --rc genhtml_function_coverage=1
00:33:38.894 --rc genhtml_legend=1
00:33:38.894 --rc geninfo_all_blocks=1
00:33:38.894 --rc geninfo_unexecuted_blocks=1
00:33:38.894
00:33:38.894 '
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.894 13:44:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:38.894 13:44:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.894 13:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.447 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:41.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:41.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:41.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:41.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:41.448 13:44:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:41.448 13:44:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:41.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:33:41.448 00:33:41.448 --- 10.0.0.2 ping statistics --- 00:33:41.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.448 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:41.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:33:41.448 00:33:41.448 --- 10.0.0.1 ping statistics --- 00:33:41.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.448 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.448 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=380072 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:41.449 13:44:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 380072 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 380072 ']' 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:41.449 13:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9d466d5ca2a55c1c6a71c7e38f0ad428 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.2FB 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9d466d5ca2a55c1c6a71c7e38f0ad428 0 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9d466d5ca2a55c1c6a71c7e38f0ad428 0 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=9d466d5ca2a55c1c6a71c7e38f0ad428 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.2FB 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.2FB 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.2FB 00:33:41.449 13:44:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=246d47c39d2b858886a1ed4279ec792c09c74cc1ee9e0143c4823fad112ab2e2 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.xvi 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 246d47c39d2b858886a1ed4279ec792c09c74cc1ee9e0143c4823fad112ab2e2 3 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 246d47c39d2b858886a1ed4279ec792c09c74cc1ee9e0143c4823fad112ab2e2 3 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=246d47c39d2b858886a1ed4279ec792c09c74cc1ee9e0143c4823fad112ab2e2 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:41.449 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.xvi 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.xvi 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xvi 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=aeefd2ea58417ecb663af8a2fc7398e604c37b711ec7fa42 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.HcF 00:33:41.708 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key aeefd2ea58417ecb663af8a2fc7398e604c37b711ec7fa42 0 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 aeefd2ea58417ecb663af8a2fc7398e604c37b711ec7fa42 0 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.709 13:44:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=aeefd2ea58417ecb663af8a2fc7398e604c37b711ec7fa42 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.HcF 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.HcF 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.HcF 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7832a282c446214f471c92b52cb9627ceda6045991be2b71 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.EGb 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7832a282c446214f471c92b52cb9627ceda6045991be2b71 2 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
format_key DHHC-1 7832a282c446214f471c92b52cb9627ceda6045991be2b71 2 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7832a282c446214f471c92b52cb9627ceda6045991be2b71 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.EGb 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.EGb 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.EGb 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a75858d0204298590cfcf9ae6c24eed8 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.x5c 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a75858d0204298590cfcf9ae6c24eed8 1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a75858d0204298590cfcf9ae6c24eed8 1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a75858d0204298590cfcf9ae6c24eed8 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.x5c 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.x5c 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.x5c 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@753 -- # key=74e2d970417d51c1d39022b5caada245 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.fIw 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 74e2d970417d51c1d39022b5caada245 1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 74e2d970417d51c1d39022b5caada245 1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=74e2d970417d51c1d39022b5caada245 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.fIw 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.fIw 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fIw 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:41.709 13:44:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=10faea3769bf6aa0896fdcbf793a7329de07019e89e96af4 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.94T 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 10faea3769bf6aa0896fdcbf793a7329de07019e89e96af4 2 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 10faea3769bf6aa0896fdcbf793a7329de07019e89e96af4 2 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=10faea3769bf6aa0896fdcbf793a7329de07019e89e96af4 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:41.709 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.94T 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.94T 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.94T 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4d7f7d4d9b003369b106e8b0ecb7e00e 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Yxs 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4d7f7d4d9b003369b106e8b0ecb7e00e 0 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4d7f7d4d9b003369b106e8b0ecb7e00e 0 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4d7f7d4d9b003369b106e8b0ecb7e00e 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Yxs 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Yxs 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Yxs 00:33:41.968 13:44:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e9603ab9f596ab8c377c0431dfc8387155ba9f7dac8b0e104f103565859d74ed 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.pKL 00:33:41.968 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e9603ab9f596ab8c377c0431dfc8387155ba9f7dac8b0e104f103565859d74ed 3 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e9603ab9f596ab8c377c0431dfc8387155ba9f7dac8b0e104f103565859d74ed 3 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e9603ab9f596ab8c377c0431dfc8387155ba9f7dac8b0e104f103565859d74ed 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.pKL 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.pKL 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pKL 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 380072 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 380072 ']' 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:41.969 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2FB 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xvi ]] 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xvi 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HcF 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.EGb ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EGb 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.x5c 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fIw ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fIw 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.94T 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Yxs ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Yxs 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pKL 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:42.228 13:44:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:42.228 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:42.229 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:42.229 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:42.229 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:42.229 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:42.229 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:33:42.229 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:42.229 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:42.488 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:42.488 13:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:43.424 Waiting for block devices as requested 00:33:43.424 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:43.683 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:43.683 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:43.683 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:43.942 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:43.942 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:43.942 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:43.942 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:44.201 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:44.201 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:44.201 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:44.201 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:44.459 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:44.459 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:44.459 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:44.459 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:44.717 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:44.977 No valid GPT data, bailing 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:44.977 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:45.236 00:33:45.236 Discovery Log Number of Records 2, Generation counter 2 00:33:45.236 =====Discovery Log Entry 0====== 00:33:45.236 trtype: tcp 00:33:45.236 adrfam: ipv4 00:33:45.236 subtype: current discovery subsystem 00:33:45.236 treq: not specified, sq flow control disable supported 00:33:45.236 portid: 1 00:33:45.236 trsvcid: 4420 00:33:45.236 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:45.236 traddr: 10.0.0.1 00:33:45.236 eflags: none 00:33:45.236 sectype: none 00:33:45.236 =====Discovery Log Entry 1====== 00:33:45.236 trtype: tcp 00:33:45.236 adrfam: ipv4 00:33:45.236 subtype: nvme subsystem 00:33:45.236 treq: not specified, sq flow control disable supported 00:33:45.236 portid: 1 00:33:45.236 trsvcid: 4420 00:33:45.236 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:45.236 traddr: 10.0.0.1 00:33:45.236 eflags: none 00:33:45.236 sectype: none 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.236 13:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.236 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.495 nvme0n1 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
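The target side the initiator is connecting to was built earlier in the log by `configure_kernel_target` and `nvmet_auth_set_key` writing into configfs. A condensed sketch of those steps, assuming root and the `nvmet`/`nvmet-tcp` modules; the log only shows the echoed values, so the configfs attribute file names below are taken from the kernel nvmet interface rather than from the log itself:

```shell
# Kernel NVMe-oF/TCP target exposing /dev/nvme0n1 at 10.0.0.1:4420,
# restricted to one host with DHCHAP parameters set (config fragment;
# requires root and a nvmet-capable kernel).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"   # values seen in the discovery log
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# auth.sh@36-51: allow only host0, then select digest/DH group per iteration
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
```

Each `nvmet_auth_set_key` call in the loop that follows rewrites only the digest, DH group, and key attributes before the next `bdev_nvme_attach_controller` attempt.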
00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.495 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 nvme0n1 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 13:44:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.754 
13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.754 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.014 nvme0n1 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:33:46.014 nvme0n1 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.014 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.273 13:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.273 nvme0n1 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:46.273 13:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.273 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.532 nvme0n1 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.532 
13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.532 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.791 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:46.792 
13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.792 13:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.792 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.051 nvme0n1 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.051 13:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.051 13:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.051 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.052 13:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.311 nvme0n1 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.311 13:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.311 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.571 nvme0n1 00:33:47.571 13:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:47.571 13:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.571 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.831 nvme0n1 00:33:47.831 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.831 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.831 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.831 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:47.831 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.831 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.832 13:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.832 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.090 nvme0n1 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.090 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:48.091 13:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.026 nvme0n1 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:49.026 
13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.026 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.027 13:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.285 nvme0n1 00:33:49.285 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.285 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.285 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.285 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.285 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.544 13:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.544 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.803 nvme0n1 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.803 13:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:49.803 
13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:49.803 13:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.803 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.062 nvme0n1 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.062 13:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.062 
13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.062 13:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.321 nvme0n1 00:33:50.321 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.321 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.321 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.321 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.321 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.321 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.579 13:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.479 13:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.479 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.737 nvme0n1 00:33:52.737 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.737 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.737 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.737 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.737 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.737 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:52.995 13:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.995 13:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.253 nvme0n1 00:33:53.253 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.253 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.253 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.253 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.253 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.253 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.511 13:44:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.511 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.077 nvme0n1 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.077 13:44:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.077 13:44:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.077 13:44:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.077 13:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.335 nvme0n1 00:33:54.335 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.335 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.335 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.335 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.335 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.593 13:44:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:54.593 13:44:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.593 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.159 nvme0n1 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.159 13:44:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:55.159 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.160 13:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.092 nvme0n1 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.093 13:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.093 13:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.093 13:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.093 13:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 nvme0n1 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.026 13:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.026 13:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.591 nvme0n1 00:33:57.591 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.850 13:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.784 nvme0n1 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.784 
13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.784 13:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.719 nvme0n1 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.719 nvme0n1 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.719 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.977 
13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.977 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.978 nvme0n1 
00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:59.978 13:44:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.978 
13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.978 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.238 nvme0n1 00:34:00.238 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.238 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.238 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.238 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.238 13:44:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.238 13:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.238 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.496 nvme0n1 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.496 13:44:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.496 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.497 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.755 nvme0n1 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.755 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.013 nvme0n1 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.013 
13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:01.013 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.014 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.272 nvme0n1 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 
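The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line that recurs in `connect_authenticate` is why key 4 is attached without `--dhchap-ctrlr-key` (its `ckey` is empty in the trace): the array expands to the extra flags only when a controller key exists for that keyid. A hedged sketch of the idiom with placeholder key material (not from this run):

```shell
#!/usr/bin/env bash
# The ${var:+word} expansion yields the flag words only when var is non-empty,
# so an absent controller key silently drops the whole option pair.
ckeys=("secret-ctrlr-key" "")   # keyid 0 has a ctrlr key, keyid 1 does not

build_args() {
    local keyid=$1
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "--dhchap-key key${keyid} ${ckey[*]}"
}

build_args 0   # both flags
build_args 1   # --dhchap-ctrlr-key omitted
```

The same conditional-argument-array trick keeps the single `rpc_cmd bdev_nvme_attach_controller` call site uniform across key ids with and without bidirectional authentication.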
00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:01.272 13:44:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.272 13:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.529 nvme0n1 00:34:01.529 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.529 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.529 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.529 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.529 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.529 13:44:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.529 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.529 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.530 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.787 nvme0n1 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.788 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.046 nvme0n1 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.046 13:44:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.046 13:44:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.046 13:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.046 13:44:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.305 nvme0n1 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.305 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.306 
13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.306 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.564 nvme0n1 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.564 13:44:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.564 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.822 13:44:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.822 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.080 nvme0n1 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.080 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.081 13:44:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.081 13:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.339 nvme0n1 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.339 13:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.339 13:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.339 
13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.339 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.597 nvme0n1 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.597 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.855 13:44:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:34:03.855 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.856 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.114 nvme0n1 
00:34:04.114 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.114 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.114 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.114 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.114 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.114 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.372 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.372 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.372 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.372 13:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:04.372 13:44:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.372 
13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.372 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.938 nvme0n1 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.938 13:44:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.938 13:44:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.938 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.939 13:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.505 nvme0n1 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:05.505 13:44:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.505 13:44:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.505 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.071 nvme0n1 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.071 13:44:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:06.071 13:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.636 nvme0n1
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW:
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=:
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW:
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=:
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:06.636 13:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.571 nvme0n1
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==:
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==:
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==:
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==:
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:07.571 13:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.505 nvme0n1
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+:
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r:
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+:
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r:
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:08.505 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.441 nvme0n1
00:34:09.441 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.441 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:09.441 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:09.441 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.441 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.441 13:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==:
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD:
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==:
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]]
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD:
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:09.441 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:09.442 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.453 nvme0n1
00:34:10.453 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.453 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:10.453 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:10.453 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.453 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.453 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.453 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=:
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=:
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.454 13:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.047 nvme0n1
13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW:
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=:
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW:
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]]
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=:
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:11.047 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.048 13:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.306 nvme0n1
13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==:
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==:
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==:
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==:
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.306 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.564 nvme0n1
13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+:
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r:
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo
DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.564 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.565 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.823 nvme0n1 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.823 nvme0n1 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.823 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.082 nvme0n1 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.082 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.341 13:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.341 13:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.341 13:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.341 nvme0n1 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.341 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.342 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:12.600 13:45:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.600 nvme0n1 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.600 
13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.600 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:12.858 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.859 13:45:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.859 nvme0n1 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.859 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.117 13:45:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.117 13:45:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.117 nvme0n1 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.117 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:13.381 13:45:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.381 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.382 13:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.382 nvme0n1 00:34:13.382 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.382 
13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.382 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.382 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.382 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.383 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.643 
13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.643 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.901 nvme0n1 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.901 13:45:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.901 
13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:13.901 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
local -A ip_candidates 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.902 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.160 nvme0n1 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 
00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.160 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.161 13:45:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.161 13:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.419 nvme0n1 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.419 13:45:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.419 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.420 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.677 nvme0n1 00:34:14.677 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.678 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.678 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.678 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.678 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.678 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.936 13:45:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.936 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.195 nvme0n1 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.195 
13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.195 13:45:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.195 13:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.762 nvme0n1 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.762 13:45:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:15.762 13:45:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.762 13:45:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.762 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.333 nvme0n1 00:34:16.333 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.333 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.333 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.333 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.333 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.333 13:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.333 13:45:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:16.333 13:45:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.333 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.334 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.334 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.334 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.899 nvme0n1 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.899 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.900 13:45:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.900 13:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.466 nvme0n1 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.466 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.467 
13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.467 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.034 nvme0n1 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.034 13:45:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQ0NjZkNWNhMmE1NWMxYzZhNzFjN2UzOGYwYWQ0MjiNrXJW: 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjQ2ZDQ3YzM5ZDJiODU4ODg2YTFlZDQyNzllYzc5MmMwOWM3NGNjMWVlOWUwMTQzYzQ4MjNmYWQxMTJhYjJlMh0RnLc=: 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.034 13:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.971 nvme0n1 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:18.971 13:45:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
[[ -z tcp ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.971 13:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.904 nvme0n1 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.904 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.905 
13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.905 13:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.839 nvme0n1 00:34:20.839 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.839 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.840 13:45:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBmYWVhMzc2OWJmNmFhMDg5NmZkY2JmNzkzYTczMjlkZTA3MDE5ZTg5ZTk2YWY0P2PpOg==: 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGQ3ZjdkNGQ5YjAwMzM2OWIxMDZlOGIwZWNiN2UwMGVdGFZD: 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.840 13:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:21.774 nvme0n1 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.774 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTk2MDNhYjlmNTk2YWI4YzM3N2MwNDMxZGZjODM4NzE1NWJhOWY3ZGFjOGIwZTEwNGYxMDM1NjU4NTlkNzRlZOzNWjM=: 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.775 
13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.775 13:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.709 nvme0n1 00:34:22.709 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.709 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.709 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.709 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.709 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.709 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.709 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:22.710 
13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.710 request: 00:34:22.710 { 00:34:22.710 "name": "nvme0", 00:34:22.710 "trtype": "tcp", 00:34:22.710 "traddr": "10.0.0.1", 00:34:22.710 "adrfam": "ipv4", 00:34:22.710 "trsvcid": "4420", 00:34:22.710 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:22.710 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:22.710 "prchk_reftag": false, 00:34:22.710 "prchk_guard": false, 00:34:22.710 "hdgst": false, 00:34:22.710 "ddgst": false, 00:34:22.710 "allow_unrecognized_csi": false, 00:34:22.710 "method": "bdev_nvme_attach_controller", 00:34:22.710 "req_id": 1 00:34:22.710 } 00:34:22.710 Got JSON-RPC error response 00:34:22.710 response: 00:34:22.710 { 00:34:22.710 "code": -5, 00:34:22.710 "message": "Input/output 
error" 00:34:22.710 } 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.710 request: 00:34:22.710 { 00:34:22.710 "name": "nvme0", 00:34:22.710 "trtype": "tcp", 00:34:22.710 "traddr": "10.0.0.1", 
00:34:22.710 "adrfam": "ipv4", 00:34:22.710 "trsvcid": "4420", 00:34:22.710 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:22.710 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:22.710 "prchk_reftag": false, 00:34:22.710 "prchk_guard": false, 00:34:22.710 "hdgst": false, 00:34:22.710 "ddgst": false, 00:34:22.710 "dhchap_key": "key2", 00:34:22.710 "allow_unrecognized_csi": false, 00:34:22.710 "method": "bdev_nvme_attach_controller", 00:34:22.710 "req_id": 1 00:34:22.710 } 00:34:22.710 Got JSON-RPC error response 00:34:22.710 response: 00:34:22.710 { 00:34:22.710 "code": -5, 00:34:22.710 "message": "Input/output error" 00:34:22.710 } 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.710 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.711 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:22.711 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.969 13:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:22.969 13:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.969 request: 00:34:22.969 { 00:34:22.969 "name": "nvme0", 00:34:22.969 "trtype": "tcp", 00:34:22.969 "traddr": "10.0.0.1", 00:34:22.969 "adrfam": "ipv4", 00:34:22.969 "trsvcid": "4420", 00:34:22.969 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:22.969 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:22.969 "prchk_reftag": false, 00:34:22.969 "prchk_guard": false, 00:34:22.969 "hdgst": false, 00:34:22.969 "ddgst": false, 00:34:22.969 "dhchap_key": "key1", 00:34:22.969 "dhchap_ctrlr_key": "ckey2", 00:34:22.969 "allow_unrecognized_csi": false, 00:34:22.969 "method": "bdev_nvme_attach_controller", 00:34:22.969 "req_id": 1 00:34:22.969 } 00:34:22.969 Got JSON-RPC error response 00:34:22.969 response: 00:34:22.969 { 00:34:22.969 "code": -5, 00:34:22.969 "message": "Input/output error" 00:34:22.969 } 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.969 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.228 nvme0n1 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.228 13:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.228 13:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:23.228 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:23.229 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:23.229 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:23.229 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:23.229 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:23.229 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.229 13:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.229 request: 00:34:23.229 { 00:34:23.229 "name": "nvme0", 00:34:23.229 "dhchap_key": "key1", 00:34:23.229 "dhchap_ctrlr_key": "ckey2", 00:34:23.229 "method": "bdev_nvme_set_keys", 00:34:23.229 "req_id": 1 00:34:23.229 } 00:34:23.229 Got JSON-RPC error response 00:34:23.229 response: 00:34:23.229 { 00:34:23.229 "code": -13, 00:34:23.229 "message": "Permission denied" 00:34:23.229 } 00:34:23.229 
13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:23.229 13:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:24.604 13:45:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWVlZmQyZWE1ODQxN2VjYjY2M2FmOGEyZmM3Mzk4ZTYwNGMzN2I3MTFlYzdmYTQyeJHO8A==: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzgzMmEyODJjNDQ2MjE0ZjQ3MWM5MmI1MmNiOTYyN2NlZGE2MDQ1OTkxYmUyYjcxCf0oKQ==: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.604 nvme0n1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc1ODU4ZDAyMDQyOTg1OTBjZmNmOWFlNmMyNGVlZDjjz6j+: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzRlMmQ5NzA0MTdkNTFjMWQzOTAyMmI1Y2FhZGEyNDWzly6r: 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:24.604 request: 00:34:24.604 { 00:34:24.604 "name": "nvme0", 00:34:24.604 "dhchap_key": "key2", 00:34:24.604 "dhchap_ctrlr_key": "ckey1", 00:34:24.604 "method": "bdev_nvme_set_keys", 00:34:24.604 "req_id": 1 00:34:24.604 } 00:34:24.604 Got JSON-RPC error response 00:34:24.604 response: 00:34:24.604 { 00:34:24.604 "code": -13, 00:34:24.604 "message": "Permission denied" 00:34:24.604 } 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:24.604 13:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:25.537 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.537 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:25.537 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.537 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:34:25.537 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:25.797 rmmod nvme_tcp 00:34:25.797 rmmod nvme_fabrics 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 380072 ']' 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 380072 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 380072 ']' 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 380072 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 
00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 380072 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 380072' 00:34:25.797 killing process with pid 380072 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 380072 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 380072 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:25.797 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:26.058 13:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:27.969 13:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:29.347 0000:00:04.7 (8086 0e27): ioatdma -> 
vfio-pci 00:34:29.347 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:29.347 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:29.347 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:29.347 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:29.347 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:29.348 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:29.348 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:29.348 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:30.288 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:30.546 13:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2FB /tmp/spdk.key-null.HcF /tmp/spdk.key-sha256.x5c /tmp/spdk.key-sha384.94T /tmp/spdk.key-sha512.pKL /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:30.546 13:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:31.481 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:31.481 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:31.481 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:31.481 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:31.481 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:31.481 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:31.740 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:31.740 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:31.740 0000:00:04.0 (8086 
0e20): Already using the vfio-pci driver 00:34:31.740 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:31.740 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:31.740 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:31.740 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:31.740 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:31.740 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:31.740 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:31.740 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:31.740 00:34:31.740 real 0m53.132s 00:34:31.740 user 0m50.605s 00:34:31.740 sys 0m6.252s 00:34:31.740 13:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:31.740 13:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.740 ************************************ 00:34:31.740 END TEST nvmf_auth_host 00:34:31.740 ************************************ 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.000 ************************************ 00:34:32.000 START TEST nvmf_digest 00:34:32.000 ************************************ 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:32.000 * Looking for test storage... 
00:34:32.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:32.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.000 --rc genhtml_branch_coverage=1 00:34:32.000 --rc genhtml_function_coverage=1 00:34:32.000 --rc genhtml_legend=1 00:34:32.000 --rc geninfo_all_blocks=1 00:34:32.000 --rc geninfo_unexecuted_blocks=1 00:34:32.000 00:34:32.000 ' 00:34:32.000 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:32.000 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:34:32.000 --rc genhtml_branch_coverage=1 00:34:32.000 --rc genhtml_function_coverage=1 00:34:32.000 --rc genhtml_legend=1 00:34:32.000 --rc geninfo_all_blocks=1 00:34:32.001 --rc geninfo_unexecuted_blocks=1 00:34:32.001 00:34:32.001 ' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:32.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.001 --rc genhtml_branch_coverage=1 00:34:32.001 --rc genhtml_function_coverage=1 00:34:32.001 --rc genhtml_legend=1 00:34:32.001 --rc geninfo_all_blocks=1 00:34:32.001 --rc geninfo_unexecuted_blocks=1 00:34:32.001 00:34:32.001 ' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:32.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.001 --rc genhtml_branch_coverage=1 00:34:32.001 --rc genhtml_function_coverage=1 00:34:32.001 --rc genhtml_legend=1 00:34:32.001 --rc geninfo_all_blocks=1 00:34:32.001 --rc geninfo_unexecuted_blocks=1 00:34:32.001 00:34:32.001 ' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:32.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.001 13:45:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:34.536 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.536 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:34.536 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:34.537 
13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:34.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:34.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:34.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:34.537 
13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:34.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.537 13:45:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:34.537 13:45:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:34.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:34.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:34:34.537 00:34:34.537 --- 10.0.0.2 ping statistics --- 00:34:34.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.537 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:34:34.537 00:34:34.537 --- 10.0.0.1 ping statistics --- 00:34:34.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.537 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:34.537 ************************************ 00:34:34.537 START TEST nvmf_digest_clean 00:34:34.537 ************************************ 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:34.537 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=390555 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 390555 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 390555 ']' 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:34.538 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.538 [2024-10-14 13:45:26.233123] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:34:34.538 [2024-10-14 13:45:26.233230] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.538 [2024-10-14 13:45:26.297070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.538 [2024-10-14 13:45:26.341297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.538 [2024-10-14 13:45:26.341347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.538 [2024-10-14 13:45:26.341373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.538 [2024-10-14 13:45:26.341384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.538 [2024-10-14 13:45:26.341395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:34.538 [2024-10-14 13:45:26.342003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.797 null0 00:34:34.797 [2024-10-14 13:45:26.576013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.797 [2024-10-14 13:45:26.600268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=390582 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 390582 /var/tmp/bperf.sock 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 390582 ']' 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:34.797 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.797 [2024-10-14 13:45:26.647838] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:34:34.797 [2024-10-14 13:45:26.647905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390582 ] 00:34:35.056 [2024-10-14 13:45:26.706521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.056 [2024-10-14 13:45:26.753009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.056 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:35.056 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:35.056 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:35.056 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:35.056 13:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:35.623 13:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.623 13:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.881 nvme0n1 00:34:35.881 13:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:35.881 13:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:35.881 Running I/O for 2 seconds... 00:34:38.189 18476.00 IOPS, 72.17 MiB/s [2024-10-14T11:45:30.042Z] 18406.50 IOPS, 71.90 MiB/s 00:34:38.189 Latency(us) 00:34:38.189 [2024-10-14T11:45:30.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.189 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:38.189 nvme0n1 : 2.01 18414.42 71.93 0.00 0.00 6944.17 3373.89 17573.36 00:34:38.189 [2024-10-14T11:45:30.042Z] =================================================================================================================== 00:34:38.189 [2024-10-14T11:45:30.042Z] Total : 18414.42 71.93 0.00 0.00 6944.17 3373.89 17573.36 00:34:38.189 { 00:34:38.189 "results": [ 00:34:38.189 { 00:34:38.189 "job": "nvme0n1", 00:34:38.189 "core_mask": "0x2", 00:34:38.189 "workload": "randread", 00:34:38.189 "status": "finished", 00:34:38.189 "queue_depth": 128, 00:34:38.189 "io_size": 4096, 00:34:38.189 "runtime": 2.006091, 00:34:38.189 "iops": 18414.418887278793, 00:34:38.189 "mibps": 71.93132377843278, 00:34:38.189 "io_failed": 0, 00:34:38.189 "io_timeout": 0, 00:34:38.189 "avg_latency_us": 6944.171515459586, 00:34:38.189 "min_latency_us": 3373.8903703703704, 00:34:38.189 "max_latency_us": 17573.357037037036 00:34:38.189 } 00:34:38.189 ], 00:34:38.189 "core_count": 1 00:34:38.189 } 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:38.189 | select(.opcode=="crc32c") 00:34:38.189 | "\(.module_name) \(.executed)"' 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 390582 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 390582 ']' 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 390582 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:38.189 13:45:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390582 00:34:38.189 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:38.189 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:38.189 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390582' 00:34:38.189 killing process with pid 390582 00:34:38.189 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 390582 00:34:38.189 Received shutdown signal, test time was about 2.000000 seconds 00:34:38.189 00:34:38.189 Latency(us) 00:34:38.189 [2024-10-14T11:45:30.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.189 [2024-10-14T11:45:30.042Z] =================================================================================================================== 00:34:38.189 [2024-10-14T11:45:30.042Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:38.189 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 390582 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=390999 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 390999 /var/tmp/bperf.sock 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 390999 ']' 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:38.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:38.447 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:38.447 [2024-10-14 13:45:30.283793] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:34:38.447 [2024-10-14 13:45:30.283890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390999 ] 00:34:38.447 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:38.447 Zero copy mechanism will not be used. 
00:34:38.706 [2024-10-14 13:45:30.347758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.706 [2024-10-14 13:45:30.394007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.706 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:38.706 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:38.706 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:38.706 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:38.706 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:39.273 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.273 13:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.531 nvme0n1 00:34:39.531 13:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:39.531 13:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:39.791 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:39.791 Zero copy mechanism will not be used. 00:34:39.791 Running I/O for 2 seconds... 
00:34:41.660 5283.00 IOPS, 660.38 MiB/s [2024-10-14T11:45:33.513Z] 5446.00 IOPS, 680.75 MiB/s 00:34:41.660 Latency(us) 00:34:41.660 [2024-10-14T11:45:33.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.660 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:41.660 nvme0n1 : 2.04 5337.82 667.23 0.00 0.00 2938.30 776.72 43884.85 00:34:41.660 [2024-10-14T11:45:33.513Z] =================================================================================================================== 00:34:41.660 [2024-10-14T11:45:33.513Z] Total : 5337.82 667.23 0.00 0.00 2938.30 776.72 43884.85 00:34:41.660 { 00:34:41.660 "results": [ 00:34:41.660 { 00:34:41.660 "job": "nvme0n1", 00:34:41.660 "core_mask": "0x2", 00:34:41.660 "workload": "randread", 00:34:41.660 "status": "finished", 00:34:41.660 "queue_depth": 16, 00:34:41.660 "io_size": 131072, 00:34:41.660 "runtime": 2.043529, 00:34:41.660 "iops": 5337.824909751709, 00:34:41.660 "mibps": 667.2281137189636, 00:34:41.660 "io_failed": 0, 00:34:41.660 "io_timeout": 0, 00:34:41.660 "avg_latency_us": 2938.2957432533376, 00:34:41.660 "min_latency_us": 776.7229629629629, 00:34:41.660 "max_latency_us": 43884.847407407404 00:34:41.660 } 00:34:41.660 ], 00:34:41.660 "core_count": 1 00:34:41.660 } 00:34:41.660 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:41.660 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:41.660 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:41.660 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:41.660 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:34:41.660 | select(.opcode=="crc32c") 00:34:41.660 | "\(.module_name) \(.executed)"' 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 390999 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 390999 ']' 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 390999 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390999 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390999' 00:34:42.226 killing process with pid 390999 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 390999 00:34:42.226 Received shutdown signal, test time was about 2.000000 seconds 00:34:42.226 00:34:42.226 
Latency(us) 00:34:42.226 [2024-10-14T11:45:34.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.226 [2024-10-14T11:45:34.079Z] =================================================================================================================== 00:34:42.226 [2024-10-14T11:45:34.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.226 13:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 390999 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=391512 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 391512 /var/tmp/bperf.sock 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 391512 ']' 00:34:42.226 13:45:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:42.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:42.226 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:42.226 [2024-10-14 13:45:34.074477] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:34:42.226 [2024-10-14 13:45:34.074572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391512 ] 00:34:42.485 [2024-10-14 13:45:34.134590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.485 [2024-10-14 13:45:34.177227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.485 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:42.485 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:42.485 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:42.485 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:42.485 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:43.052 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.052 13:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:43.311 nvme0n1 00:34:43.311 13:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:43.311 13:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:43.569 Running I/O for 2 seconds... 
00:34:45.434 21095.00 IOPS, 82.40 MiB/s [2024-10-14T11:45:37.287Z] 19875.50 IOPS, 77.64 MiB/s 00:34:45.434 Latency(us) 00:34:45.434 [2024-10-14T11:45:37.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.434 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:45.434 nvme0n1 : 2.01 19873.06 77.63 0.00 0.00 6426.28 2694.26 12621.75 00:34:45.434 [2024-10-14T11:45:37.287Z] =================================================================================================================== 00:34:45.434 [2024-10-14T11:45:37.287Z] Total : 19873.06 77.63 0.00 0.00 6426.28 2694.26 12621.75 00:34:45.434 { 00:34:45.434 "results": [ 00:34:45.434 { 00:34:45.434 "job": "nvme0n1", 00:34:45.434 "core_mask": "0x2", 00:34:45.434 "workload": "randwrite", 00:34:45.434 "status": "finished", 00:34:45.434 "queue_depth": 128, 00:34:45.434 "io_size": 4096, 00:34:45.434 "runtime": 2.008297, 00:34:45.434 "iops": 19873.056624592875, 00:34:45.434 "mibps": 77.62912743981592, 00:34:45.434 "io_failed": 0, 00:34:45.434 "io_timeout": 0, 00:34:45.434 "avg_latency_us": 6426.278663210829, 00:34:45.434 "min_latency_us": 2694.257777777778, 00:34:45.434 "max_latency_us": 12621.748148148148 00:34:45.434 } 00:34:45.434 ], 00:34:45.434 "core_count": 1 00:34:45.434 } 00:34:45.434 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:45.434 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:45.434 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:45.434 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:45.434 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:34:45.434 | select(.opcode=="crc32c") 00:34:45.434 | "\(.module_name) \(.executed)"' 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 391512 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 391512 ']' 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 391512 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 391512 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 391512' 00:34:45.692 killing process with pid 391512 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 391512 00:34:45.692 Received shutdown signal, test time was about 2.000000 seconds 00:34:45.692 00:34:45.692 
Latency(us) 00:34:45.692 [2024-10-14T11:45:37.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.692 [2024-10-14T11:45:37.545Z] =================================================================================================================== 00:34:45.692 [2024-10-14T11:45:37.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:45.692 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 391512 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=391915 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 391915 /var/tmp/bperf.sock 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 391915 ']' 00:34:45.951 13:45:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:45.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:45.951 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.951 [2024-10-14 13:45:37.738558] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:34:45.951 [2024-10-14 13:45:37.738635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391915 ] 00:34:45.951 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:45.951 Zero copy mechanism will not be used. 
00:34:45.951 [2024-10-14 13:45:37.798199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.210 [2024-10-14 13:45:37.845806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.210 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:46.210 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:46.210 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:46.210 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:46.210 13:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:46.776 13:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:46.776 13:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.034 nvme0n1 00:34:47.034 13:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:47.034 13:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:47.292 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:47.292 Zero copy mechanism will not be used. 00:34:47.292 Running I/O for 2 seconds... 
00:34:49.163 6147.00 IOPS, 768.38 MiB/s [2024-10-14T11:45:41.016Z] 6383.00 IOPS, 797.88 MiB/s
00:34:49.163 Latency(us)
00:34:49.163 [2024-10-14T11:45:41.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:49.163 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:49.163 nvme0n1 : 2.00 6378.22 797.28 0.00 0.00 2497.80 1735.49 13495.56
00:34:49.163 [2024-10-14T11:45:41.016Z] ===================================================================================================================
00:34:49.163 [2024-10-14T11:45:41.016Z] Total : 6378.22 797.28 0.00 0.00 2497.80 1735.49 13495.56
00:34:49.163 {
00:34:49.163   "results": [
00:34:49.163     {
00:34:49.163       "job": "nvme0n1",
00:34:49.163       "core_mask": "0x2",
00:34:49.163       "workload": "randwrite",
00:34:49.163       "status": "finished",
00:34:49.163       "queue_depth": 16,
00:34:49.163       "io_size": 131072,
00:34:49.163       "runtime": 2.004006,
00:34:49.163       "iops": 6378.224416493763,
00:34:49.163       "mibps": 797.2780520617204,
00:34:49.163       "io_failed": 0,
00:34:49.163       "io_timeout": 0,
00:34:49.163       "avg_latency_us": 2497.795903266747,
00:34:49.163       "min_latency_us": 1735.4903703703703,
00:34:49.163       "max_latency_us": 13495.561481481482
00:34:49.163     }
00:34:49.163   ],
00:34:49.163   "core_count": 1
00:34:49.163 }
00:34:49.163 13:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:49.163 13:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:49.163 13:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:49.163 13:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:49.163 13:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:49.163 | select(.opcode=="crc32c") 00:34:49.163 | "\(.module_name) \(.executed)"' 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 391915 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 391915 ']' 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 391915 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 391915 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 391915' 00:34:49.421 killing process with pid 391915 00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 391915 00:34:49.421 Received shutdown signal, test time was about 2.000000 seconds 00:34:49.421 00:34:49.421 
Latency(us)
00:34:49.421 [2024-10-14T11:45:41.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:49.421 [2024-10-14T11:45:41.274Z] ===================================================================================================================
00:34:49.421 [2024-10-14T11:45:41.274Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:49.421 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 391915
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 390555
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 390555 ']'
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 390555
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 390555
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 390555'
00:34:49.680 killing process with pid 390555
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 390555
00:34:49.680 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 390555
00:34:49.939
00:34:49.939 real 0m15.516s
00:34:49.939 user
0m30.175s 00:34:49.939 sys 0m4.529s 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:49.939 ************************************ 00:34:49.939 END TEST nvmf_digest_clean 00:34:49.939 ************************************ 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.939 ************************************ 00:34:49.939 START TEST nvmf_digest_error 00:34:49.939 ************************************ 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=392417 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:49.939 13:45:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 392417 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 392417 ']' 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:49.939 13:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.197 [2024-10-14 13:45:41.800961] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:34:50.197 [2024-10-14 13:45:41.801036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.197 [2024-10-14 13:45:41.866913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.197 [2024-10-14 13:45:41.912474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.197 [2024-10-14 13:45:41.912530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:50.197 [2024-10-14 13:45:41.912544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.197 [2024-10-14 13:45:41.912556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.197 [2024-10-14 13:45:41.912566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.197 [2024-10-14 13:45:41.913209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.197 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.197 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:50.197 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:50.197 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.197 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.456 [2024-10-14 13:45:42.057954] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.456 13:45:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.456 null0 00:34:50.456 [2024-10-14 13:45:42.161215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.456 [2024-10-14 13:45:42.185422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=392497 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 392497 /var/tmp/bperf.sock 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 392497 ']' 
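The error-injection test being set up here corrupts the crc32c data digest so the host sees "data digest error" on its reads. A toy illustration of the detection property the digest provides, under stated assumptions: it uses zlib's CRC-32 for brevity rather than the CRC-32C that NVMe/TCP actually uses, and the `digest_ok` helper is invented for this sketch; the single-bit-corruption guarantee shown holds for both CRC variants.

```python
# Toy illustration of a data-digest check: the receiver recomputes a
# CRC over the payload and compares it with the digest carried in the
# PDU. zlib's CRC-32 stands in for CRC-32C here; any single-bit flip
# in the payload is guaranteed to change a CRC, so corruption injected
# into the digest path is always detected.
import zlib

def digest_ok(payload: bytes, digest: int) -> bool:
    """Recompute the CRC and compare it with the received digest."""
    return zlib.crc32(payload) == digest

data = b"spdk nvme/tcp pdu payload"
good_digest = zlib.crc32(data)
assert digest_ok(data, good_digest)

# Flip one bit, as the injected 'corrupt' accel error effectively does.
corrupted = bytes([data[0] ^ 0x01]) + data[1:]
assert not digest_ok(corrupted, good_digest)  # host reports digest error
print("corruption detected")
```

This is why every corrupted completion in the trace that follows surfaces as a transient transport error rather than silently returning bad data.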
00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:50.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.456 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.456 [2024-10-14 13:45:42.232660] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:34:50.456 [2024-10-14 13:45:42.232721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392497 ] 00:34:50.456 [2024-10-14 13:45:42.290047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.715 [2024-10-14 13:45:42.336424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.715 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:50.715 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:50.715 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:50.715 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:50.973 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:50.973 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.973 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.973 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.973 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:50.973 13:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.231 nvme0n1 00:34:51.231 13:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:51.231 13:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.231 13:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.231 13:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.231 13:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:51.231 13:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.490 Running I/O for 2 seconds... 00:34:51.490 [2024-10-14 13:45:43.201716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.201766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.201784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.215676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.215705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.215721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.231200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.231248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.231265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.242587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.242617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16503 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.242632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.256875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.256904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.256920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.270480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.270523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.270539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.285323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.285369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.285386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.297641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.297675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.297691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.310733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.310762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.310778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.327155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.327185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.327216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.490 [2024-10-14 13:45:43.337933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.490 [2024-10-14 13:45:43.337980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.490 [2024-10-14 13:45:43.337997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.353386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.353419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.353436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.369922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.369953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.369969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.384663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.384693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.384724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.395877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.395904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.395920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.411727] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.411756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.411771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.426432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.426479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.426496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.442979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.443008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.443023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.456842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.456874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.456892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.468041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.468072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.468088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.482589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.482633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.482650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.499549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.499580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.499597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.750 [2024-10-14 13:45:43.511791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:51.750 [2024-10-14 13:45:43.511835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.750 [2024-10-14 13:45:43.511852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:51.750 [2024-10-14 13:45:43.523431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:51.750 [2024-10-14 13:45:43.523462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.750 [2024-10-14 13:45:43.523479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:51.750 [2024-10-14 13:45:43.537197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:51.750 [2024-10-14 13:45:43.537226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.750 [2024-10-14 13:45:43.537263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:51.750 [2024-10-14 13:45:43.552357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:51.750 [2024-10-14 13:45:43.552388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.750 [2024-10-14 13:45:43.552420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:51.750 [2024-10-14 13:45:43.563500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:51.750 [2024-10-14 13:45:43.563528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.750 [2024-10-14 13:45:43.563543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:51.750 [2024-10-14 13:45:43.580531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:51.750 [2024-10-14 13:45:43.580560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.750 [2024-10-14 13:45:43.580575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:51.750 [2024-10-14 13:45:43.595986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:51.750 [2024-10-14 13:45:43.596014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.750 [2024-10-14 13:45:43.596029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.009 [2024-10-14 13:45:43.609772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.009 [2024-10-14 13:45:43.609805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.009 [2024-10-14 13:45:43.609822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.009 [2024-10-14 13:45:43.621018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.009 [2024-10-14 13:45:43.621045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.009 [2024-10-14 13:45:43.621060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.009 [2024-10-14 13:45:43.635374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.009 [2024-10-14 13:45:43.635404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.635437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.651089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.651150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.651168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.666188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.666240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.666258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.678280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.678310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.678327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.692468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.692495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.692512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.708732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.708762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.708779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.723638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.723669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.723687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.736848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.736878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.736894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.748794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.748822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.748838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.763662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.763690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.763706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.777328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.777356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.777372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.793805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.793832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.793847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.809756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.809787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.809803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.822136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.822182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.822200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.834936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.834980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.834997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.850203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.850233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.850249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.010 [2024-10-14 13:45:43.862322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.010 [2024-10-14 13:45:43.862353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.010 [2024-10-14 13:45:43.862371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.877822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.877850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.877866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.893782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.893810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.893825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.909043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.909077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.909094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.922447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.922477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.922494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.935727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.935768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.935785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.948941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.948970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.948986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.963765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.963795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.963812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.978051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.978097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.978114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:43.989857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:43.989885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:43.989900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.004565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.004592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.004607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.021715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.021758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.021774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.036198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.036228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.036245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.051205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.051236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.051254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.063238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.063269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.063286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.075914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.075942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.075958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.090024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.090053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.090070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.105656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.105684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.105701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.269 [2024-10-14 13:45:44.122031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.269 [2024-10-14 13:45:44.122088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.269 [2024-10-14 13:45:44.122105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.135673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.135720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.135737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.146423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.146468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.146491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.162010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.162039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.162056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.177830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.177858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.177874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 18000.00 IOPS, 70.31 MiB/s [2024-10-14T11:45:44.380Z] [2024-10-14 13:45:44.195797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.195825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.195841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.210822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.210865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.210882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.226638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.226666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.226682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.241956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.241984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.527 [2024-10-14 13:45:44.242000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.527 [2024-10-14 13:45:44.257461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.527 [2024-10-14 13:45:44.257492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.257508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.269252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.269281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.269298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.284039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.284087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.284103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.295383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.295411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.295427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.310418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.310460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.310475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.325064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.325092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.325123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.337637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.337664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.337680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.350601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.350628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.350644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.363812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.363854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.363869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.528 [2024-10-14 13:45:44.376882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.528 [2024-10-14 13:45:44.376911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.528 [2024-10-14 13:45:44.376928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.391476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.391521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.391537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.406103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.406142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.406161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.419702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.419729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.419745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.433750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.433795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.433811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.445691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.445734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.445751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.458241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.458286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.458302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.471264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.471292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.471306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.486378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.486406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.486422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.501255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.501286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.501303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.515957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.515991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.516008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.528525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.528554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.528586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.540870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.540899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.540914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.786 [2024-10-14 13:45:44.553933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.786 [2024-10-14 13:45:44.553964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.786 [2024-10-14 13:45:44.553981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.787 [2024-10-14 13:45:44.568785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.787 [2024-10-14 13:45:44.568814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.787 [2024-10-14 13:45:44.568830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.787 [2024-10-14 13:45:44.579989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.787 [2024-10-14 13:45:44.580019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.787 [2024-10-14 13:45:44.580036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.787 [2024-10-14 13:45:44.594013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.787 [2024-10-14 13:45:44.594058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.787 [2024-10-14 13:45:44.594075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.787 [2024-10-14 13:45:44.608018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.787 [2024-10-14 13:45:44.608047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.787 [2024-10-14 13:45:44.608079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.787 [2024-10-14 13:45:44.618665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.787 [2024-10-14 13:45:44.618691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.787 [2024-10-14 13:45:44.618707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:52.787 [2024-10-14 13:45:44.635611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:52.787 [2024-10-14 13:45:44.635654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.787 [2024-10-14 13:45:44.635669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:53.045 [2024-10-14 13:45:44.649226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020)
00:34:53.045 [2024-10-14 13:45:44.649256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.045 [2024-10-14 13:45:44.649271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.662694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.662723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.662754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.676160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.676190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.676221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.689445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.689474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.689491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.701927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.701953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.701968] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.715034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.715063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.715078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.729532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.729560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.729575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.743690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.743718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.743738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.756553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.756580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17484 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.756595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.770671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.770699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.770715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.785081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.785126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.785149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.801053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.801084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.801101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.818686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.818714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.818729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.829464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.829508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.045 [2024-10-14 13:45:44.829523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.045 [2024-10-14 13:45:44.843456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.045 [2024-10-14 13:45:44.843484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.046 [2024-10-14 13:45:44.843500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.046 [2024-10-14 13:45:44.857844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.046 [2024-10-14 13:45:44.857886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.046 [2024-10-14 13:45:44.857903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.046 [2024-10-14 13:45:44.873748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.046 [2024-10-14 
13:45:44.873785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.046 [2024-10-14 13:45:44.873803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.046 [2024-10-14 13:45:44.886868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.046 [2024-10-14 13:45:44.886898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.046 [2024-10-14 13:45:44.886915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.304 [2024-10-14 13:45:44.902080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.304 [2024-10-14 13:45:44.902113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.304 [2024-10-14 13:45:44.902138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.304 [2024-10-14 13:45:44.913867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.304 [2024-10-14 13:45:44.913895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.304 [2024-10-14 13:45:44.913910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.304 [2024-10-14 13:45:44.929452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1408020) 00:34:53.304 [2024-10-14 13:45:44.929483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.304 [2024-10-14 13:45:44.929514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.304 [2024-10-14 13:45:44.944542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.304 [2024-10-14 13:45:44.944570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.304 [2024-10-14 13:45:44.944585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.304 [2024-10-14 13:45:44.959290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.304 [2024-10-14 13:45:44.959322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.304 [2024-10-14 13:45:44.959339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.304 [2024-10-14 13:45:44.974347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.304 [2024-10-14 13:45:44.974376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.304 [2024-10-14 13:45:44.974392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.304 [2024-10-14 13:45:44.985610] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.304 [2024-10-14 13:45:44.985638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:44.985653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.002219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.002264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.002280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.016670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.016701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.016733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.032057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.032087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.032104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.043215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.043244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.043260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.057610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.057655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.057671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.071732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.071762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.071778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.085184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.085215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.085232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.098221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.098254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.098271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.110890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.110920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.110956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.123367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.123396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.123412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.137350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.137378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 
13:45:45.137394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.305 [2024-10-14 13:45:45.149737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.305 [2024-10-14 13:45:45.149765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.305 [2024-10-14 13:45:45.149781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.563 [2024-10-14 13:45:45.162588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.563 [2024-10-14 13:45:45.162620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.563 [2024-10-14 13:45:45.162637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.563 [2024-10-14 13:45:45.179355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.563 [2024-10-14 13:45:45.179400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.563 [2024-10-14 13:45:45.179418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.563 18218.50 IOPS, 71.17 MiB/s [2024-10-14T11:45:45.416Z] [2024-10-14 13:45:45.192198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1408020) 00:34:53.563 [2024-10-14 13:45:45.192228] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.563 [2024-10-14 13:45:45.192244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.563 00:34:53.563 Latency(us) 00:34:53.563 [2024-10-14T11:45:45.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.563 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:53.563 nvme0n1 : 2.01 18212.92 71.14 0.00 0.00 7017.51 3519.53 25243.50 00:34:53.563 [2024-10-14T11:45:45.416Z] =================================================================================================================== 00:34:53.563 [2024-10-14T11:45:45.416Z] Total : 18212.92 71.14 0.00 0.00 7017.51 3519.53 25243.50 00:34:53.563 { 00:34:53.563 "results": [ 00:34:53.563 { 00:34:53.563 "job": "nvme0n1", 00:34:53.563 "core_mask": "0x2", 00:34:53.563 "workload": "randread", 00:34:53.563 "status": "finished", 00:34:53.563 "queue_depth": 128, 00:34:53.563 "io_size": 4096, 00:34:53.563 "runtime": 2.007641, 00:34:53.563 "iops": 18212.91754850593, 00:34:53.563 "mibps": 71.1442091738513, 00:34:53.563 "io_failed": 0, 00:34:53.563 "io_timeout": 0, 00:34:53.563 "avg_latency_us": 7017.505901514806, 00:34:53.563 "min_latency_us": 3519.525925925926, 00:34:53.563 "max_latency_us": 25243.496296296296 00:34:53.563 } 00:34:53.563 ], 00:34:53.563 "core_count": 1 00:34:53.563 } 00:34:53.563 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:53.563 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:53.563 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:53.563 | .driver_specific 00:34:53.563 | .nvme_error 00:34:53.563 | .status_code 00:34:53.563 | 
.command_transient_transport_error' 00:34:53.563 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:53.821 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:34:53.821 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 392497 00:34:53.821 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 392497 ']' 00:34:53.821 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 392497 00:34:53.821 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:34:53.821 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:53.821 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392497 00:34:53.822 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:53.822 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:53.822 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392497' 00:34:53.822 killing process with pid 392497 00:34:53.822 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 392497 00:34:53.822 Received shutdown signal, test time was about 2.000000 seconds 00:34:53.822 00:34:53.822 Latency(us) 00:34:53.822 [2024-10-14T11:45:45.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.822 [2024-10-14T11:45:45.675Z] 
=================================================================================================================== 00:34:53.822 [2024-10-14T11:45:45.675Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:53.822 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 392497 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=392902 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 392902 /var/tmp/bperf.sock 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 392902 ']' 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:54.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:54.080 13:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.080 [2024-10-14 13:45:45.768158] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:34:54.080 [2024-10-14 13:45:45.768250] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392902 ] 00:34:54.080 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:54.080 Zero copy mechanism will not be used. 00:34:54.080 [2024-10-14 13:45:45.828349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.080 [2024-10-14 13:45:45.876564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.339 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.339 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:34:54.339 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:54.339 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:54.597 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:54.597 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.597 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:54.597 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.597 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:54.597 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.165 nvme0n1 00:34:55.165 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:55.165 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.165 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.165 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.165 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:55.165 13:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:55.165 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:55.165 Zero copy mechanism will not be used. 00:34:55.165 Running I/O for 2 seconds... 
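The `get_transient_errcount` helper traced earlier in this log pulls the transient-transport-error counter out of `bdev_get_iostat` with a jq filter and then checks `(( count > 0 ))`. A minimal self-contained sketch of that extraction follows; the sample JSON document is hypothetical (hand-written to illustrate the shape), and only the jq field path and the greater-than-zero check are taken from the trace above:

```shell
#!/usr/bin/env bash
# Hypothetical sample of bdev_get_iostat output; the nested field path
# mirrors the jq filter used by digest.sh in the trace above.
cat > /tmp/iostat_sample.json <<'EOF'
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 143
          }
        }
      }
    }
  ]
}
EOF

# Same extraction the test performs: walk bdevs[0] down to the
# transient transport error counter and print it as a raw value.
errcount=$(jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' /tmp/iostat_sample.json)

# The digest-error test passes when at least one transient transport
# error was observed after injecting crc32c corruption.
(( errcount > 0 )) && echo "transient errors: $errcount"
```

With the injected crc32c corruption active, every data digest mismatch surfaces as a `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion, so a nonzero counter here confirms the `--ddgst` path actually exercised the digest check.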
00:34:55.165 [2024-10-14 13:45:46.920604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.920663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.920685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.926243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.926278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.926297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.933862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.933894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.933925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.940950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.940982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.941000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.945608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.945641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.945659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.950612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.950644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.950663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.955915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.955946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.955978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.962791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.962837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.962855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.970018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.970051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.970069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.975911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.975942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.975984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.981938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.981971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.981989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.987226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.987258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.987276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.993735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.993768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.993786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:46.999608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:46.999640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:46.999658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:47.004821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:47.004853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:47.004871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:47.009997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:47.010029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:47.010047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:47.014490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:47.014522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:47.014539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.165 [2024-10-14 13:45:47.017412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.165 [2024-10-14 13:45:47.017442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.165 [2024-10-14 13:45:47.017460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.022853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.022891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.022910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.028404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.028435] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.028454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.034066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.034099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.034139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.038599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.038663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.043295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.043326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.043344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.048072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.048104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.048121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.052579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.052610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.052627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.057008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.057053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.057071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.061545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.061577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.061595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.066065] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.066097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.066114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.070595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.070625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.425 [2024-10-14 13:45:47.070643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.425 [2024-10-14 13:45:47.075282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.425 [2024-10-14 13:45:47.075312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.075329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.080031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.080079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.080096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.084866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.084911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.084927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.089566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.089596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.089614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.094955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.094986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.095004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.101820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.101853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.101870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.109163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.109220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.116559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.116606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.116624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.124579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.124627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.124645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.131871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.131904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.131922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.138405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.138453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.138471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.145088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.145121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.145150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.151145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.151177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.151196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.157404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.157436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.157455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.163409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.163456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.163473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.169620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.169667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.169684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.175724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.175771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.175789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.181634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.181667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.181684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.187573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.187604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.187622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.193279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.193311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.193329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.198447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.198479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.198513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.203474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.203507] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.203525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.208695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.208742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.208760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.213685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.213717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.213740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.219397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.219430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.219447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.225455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.225487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.225506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.230253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.230285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.230302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.234859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.234891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.234909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.239357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.426 [2024-10-14 13:45:47.239388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.426 [2024-10-14 13:45:47.239405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.426 [2024-10-14 13:45:47.243999] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.244029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.244047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.427 [2024-10-14 13:45:47.249219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.249251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.249268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.427 [2024-10-14 13:45:47.254015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.254046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.254063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.427 [2024-10-14 13:45:47.258471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.258507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.258525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.427 [2024-10-14 13:45:47.263195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.263226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.263243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.427 [2024-10-14 13:45:47.268030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.268061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.268078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.427 [2024-10-14 13:45:47.272698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.272729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.272747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.427 [2024-10-14 13:45:47.277527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.427 [2024-10-14 13:45:47.277569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.427 [2024-10-14 13:45:47.277591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.686 [2024-10-14 13:45:47.282286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.686 [2024-10-14 13:45:47.282317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.686 [2024-10-14 13:45:47.282335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.686 [2024-10-14 13:45:47.287680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.686 [2024-10-14 13:45:47.287712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.686 [2024-10-14 13:45:47.287730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.686 [2024-10-14 13:45:47.295241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.686 [2024-10-14 13:45:47.295273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.686 [2024-10-14 13:45:47.295291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.686 [2024-10-14 13:45:47.301497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.686 [2024-10-14 13:45:47.301530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.686 [2024-10-14 13:45:47.301547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.686 [2024-10-14 13:45:47.307271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.686 [2024-10-14 13:45:47.307303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.686 [2024-10-14 13:45:47.307321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.686 [2024-10-14 13:45:47.312733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.686 [2024-10-14 13:45:47.312765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.312783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.317958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.317990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.318007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.324479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.324511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.324529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.331986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.332018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.332036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.335926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.335958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.335976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.341123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.341163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.341181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.347406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.347449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.347467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.354223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.354272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.354296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.361099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.361169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.361187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.369045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.369077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.369108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.376779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.376826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.376848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.383888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.383921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.383939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.390488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.390520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.390552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.396334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.396365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.396398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.401721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.401754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.401772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.408116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.408158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.408177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.414649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.414687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.414705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.420476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.420508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.420547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.425694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.425726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.425744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.431169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.431204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.431221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.437891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.437937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.437954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.445192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.445224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.445242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.450633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.450680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.450697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.456288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.456319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.456335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.461439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.461471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.461490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.467285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.467317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.467335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.474116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.474161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.474180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.479811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.479843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.479860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.484465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.687 [2024-10-14 13:45:47.484497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.687 [2024-10-14 13:45:47.484515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.687 [2024-10-14 13:45:47.487419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.487465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.487482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.491952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.491985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.492003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.497302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.497334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.497351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.503348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.503380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.503398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.508624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.508655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.508679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.513434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.513465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.513499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.518262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.518295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.518313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.522493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.522524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.522542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.527011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.527042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.527060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.531625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.531656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.531673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.688 [2024-10-14 13:45:47.536173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.688 [2024-10-14 13:45:47.536205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.688 [2024-10-14 13:45:47.536223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.541225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.541256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.541274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.546760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.546792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.546811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.552176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.552207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.552225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.557659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.557691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.557709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.563892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.563924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.563942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.569725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.569756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.569774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.575676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.575708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.575726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.582034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.582068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.582086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.588932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.588964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.588997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.595434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.595466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.595485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.602074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.602106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.602138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.608572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.608604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.608622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.611594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.611625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.611642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.615183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.615213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.615230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.619704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.619735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.619752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.624265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.624296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.624314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.629637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.948 [2024-10-14 13:45:47.629669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.948 [2024-10-14 13:45:47.629687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.948 [2024-10-14 13:45:47.634630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.634661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.949 [2024-10-14 13:45:47.634679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.949 [2024-10-14 13:45:47.640119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.640159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.949 [2024-10-14 13:45:47.640177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.949 [2024-10-14 13:45:47.645460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.645500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.949 [2024-10-14 13:45:47.645519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.949 [2024-10-14 13:45:47.651053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.651085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.949 [2024-10-14 13:45:47.651102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:55.949 [2024-10-14 13:45:47.655544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.655576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.949 [2024-10-14 13:45:47.655594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:55.949 [2024-10-14 13:45:47.660142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.660200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.949 [2024-10-14 13:45:47.660217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:55.949 [2024-10-14 13:45:47.664891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.664924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.949 [2024-10-14 13:45:47.664942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:55.949 [2024-10-14 13:45:47.669634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:55.949 [2024-10-14 13:45:47.669666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.669683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.674668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.674700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.674718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.680076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.680123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.680153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.684879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.684910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.684927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.689408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.689438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.689455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.694792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.694825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.694842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.700104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.700142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.700161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.704790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.704821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.704853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.709460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.709491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.709509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.714126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.714179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.714197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.718810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.718857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.718873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.724235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.724267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.724285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.729178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.729210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.729233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.733853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.733885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.733902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.739280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.739313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.739330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.745794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.745834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.745852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.753338] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.753371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.753388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.760773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.760805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.760822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.768371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.768404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.768421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.775957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.775991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.776025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.783666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.783699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.949 [2024-10-14 13:45:47.783717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.949 [2024-10-14 13:45:47.791288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.949 [2024-10-14 13:45:47.791328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.950 [2024-10-14 13:45:47.791361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.950 [2024-10-14 13:45:47.799009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:55.950 [2024-10-14 13:45:47.799042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.950 [2024-10-14 13:45:47.799071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.806587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.806620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.806638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.814525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.814558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.814576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.822752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.822799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.822816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.829439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.829471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.829488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.837145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.837178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.837196] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.844954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.844987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.845005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.852915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.852949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.852967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.860068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.860102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.860120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.865539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.865571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.865588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.870103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.870143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.870163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.874906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.874938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.874956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.879963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.879996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.880014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.885760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.885792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.885810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.891615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.891648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.891666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.898195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.898228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.898246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.903292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.903323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.903348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.908691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.908723] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.908740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.913519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.913551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.913569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.210 5469.00 IOPS, 683.62 MiB/s [2024-10-14T11:45:48.063Z] [2024-10-14 13:45:47.920396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.920428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.920447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.925941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.925974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.925992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.932075] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.932108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.932126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.938024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.938057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.938075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.943603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.943635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.943654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.949388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.949420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.210 [2024-10-14 13:45:47.949438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:34:56.210 [2024-10-14 13:45:47.955171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.210 [2024-10-14 13:45:47.955203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.955221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:47.960875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:47.960908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.960926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:47.966670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:47.966703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.966722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:47.972218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:47.972250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.972268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:47.978219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:47.978252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.978271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:47.984334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:47.984367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.984386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:47.990268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:47.990300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.990318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:47.996335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:47.996368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:47.996386] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.002418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.002450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.002475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.008583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.008615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.008633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.014806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.014839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.014857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.020807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.020839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.020858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.026795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.026828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.026846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.032784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.032816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.032835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.039771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.039804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.039822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.211 [2024-10-14 13:45:48.047139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.211 [2024-10-14 13:45:48.047171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.211 [2024-10-14 13:45:48.047189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.462148] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.462180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.462198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.467977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.468008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.468026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.473791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.473824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.473842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.480313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.480345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.480362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.487096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.487137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.487157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.493301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.493334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.493353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.498539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.498572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.498589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.504514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.504546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.504564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.511623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.511655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.511673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.517892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.517925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.517944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.523214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.523247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.523264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.527452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.527484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.527502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.532031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.532062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.532079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.536543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.536573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.536591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.540969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.541000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.541024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.545589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.545619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.545637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.550024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.550054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.550071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.554486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.554516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.554533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.558870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.558901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.558918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.563398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.563429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.563446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.566388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.566418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.566435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.570503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.570534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.570551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.734 [2024-10-14 13:45:48.577508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.734 [2024-10-14 13:45:48.577540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.734 [2024-10-14 13:45:48.577557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.735 [2024-10-14 13:45:48.584382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.735 [2024-10-14 13:45:48.584420] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.735 [2024-10-14 13:45:48.584439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.994 [2024-10-14 13:45:48.589972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.994 [2024-10-14 13:45:48.590005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.590023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.595615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.595648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.595666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.600828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.600860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.600878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.606837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.606869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.606887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.612048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.612080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.612097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.617084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.617116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.617142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.619900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.619931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.619948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.625732] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.625778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.625795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.631615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.631645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.631662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.635351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.635381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.635397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.639886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.639917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.639935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.644377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.644406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.644423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.648852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.648882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.648899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.653168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.653199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.653218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.657489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.657518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.657535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.661845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.661875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.661893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.666157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.666201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.666219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.670585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.670614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.670631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.675193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.675222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.675239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.680464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.680496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.680513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.685459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.685491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.685509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.690411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.690441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.690459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.694856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.694885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.694902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.699658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.699689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.699707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.704570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.704603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.704621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.709558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.995 [2024-10-14 13:45:48.709590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.995 [2024-10-14 13:45:48.709607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.995 [2024-10-14 13:45:48.714572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.996 [2024-10-14 13:45:48.714604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.996 [2024-10-14 13:45:48.714621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:56.996 [2024-10-14 13:45:48.719091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.996 [2024-10-14 13:45:48.719123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.996 [2024-10-14 13:45:48.719149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.996 [2024-10-14 13:45:48.723690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.996 [2024-10-14 13:45:48.723737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.996 [2024-10-14 13:45:48.723754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.996 [2024-10-14 13:45:48.729190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.996 [2024-10-14 13:45:48.729221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.996 [2024-10-14 13:45:48.729238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.996 [2024-10-14 13:45:48.734695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500) 00:34:56.996 [2024-10-14 
13:45:48.734726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.734744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.741266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.741299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.741318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.746890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.746922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.746940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.752242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.752274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.752298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.757480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.757513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.757531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.762217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.762248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.762265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.767901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.767934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.767952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.772896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.772927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.772959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.779492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.779525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.779543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.786463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.786496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.786514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.792118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.792156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.792175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.797686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.797718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.797736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.803379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.803417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.803436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.809094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.809126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.809154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.813652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.813683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.813701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.818346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.818378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.818395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.823743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.823774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.823792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.828592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.828623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.828641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.833629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.833661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.833679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.839630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.839661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.839680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:56.996 [2024-10-14 13:45:48.845220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:56.996 [2024-10-14 13:45:48.845254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:56.996 [2024-10-14 13:45:48.845280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.848758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.848791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.848808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.853204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.853236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.853254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.859160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.859193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.859210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.864471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.864502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.864520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.869325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.869357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.869375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.874147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.874178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.874196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.879158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.879200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.879218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.884956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.885003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.885021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.891539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.891571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.891596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.897457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.897506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.897523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.903491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14
13:45:48.903539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.903557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.909069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.909117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.909142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:57.256 [2024-10-14 13:45:48.914383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.914415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.914433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:57.256 5574.00 IOPS, 696.75 MiB/s [2024-10-14T11:45:49.109Z]
[2024-10-14 13:45:48.920252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ef500)
00:34:57.256 [2024-10-14 13:45:48.920285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.256 [2024-10-14 13:45:48.920303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:57.256
00:34:57.256 Latency(us)
00:34:57.256 [2024-10-14T11:45:49.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:57.256 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:57.256 nvme0n1 : 2.00 5572.30 696.54 0.00 0.00 2866.83 709.97 8543.95
00:34:57.256 [2024-10-14T11:45:49.109Z] ===================================================================================================================
00:34:57.256 [2024-10-14T11:45:49.109Z] Total : 5572.30 696.54 0.00 0.00 2866.83 709.97 8543.95
00:34:57.256 {
00:34:57.256 "results": [
00:34:57.256 {
00:34:57.256 "job": "nvme0n1",
00:34:57.256 "core_mask": "0x2",
00:34:57.256 "workload": "randread",
00:34:57.256 "status": "finished",
00:34:57.256 "queue_depth": 16,
00:34:57.256 "io_size": 131072,
00:34:57.256 "runtime": 2.003483,
00:34:57.256 "iops": 5572.2958467828275,
00:34:57.256 "mibps": 696.5369808478534,
00:34:57.256 "io_failed": 0,
00:34:57.256 "io_timeout": 0,
00:34:57.256 "avg_latency_us": 2866.829845137147,
00:34:57.256 "min_latency_us": 709.9733333333334,
00:34:57.256 "max_latency_us": 8543.952592592592
00:34:57.256 }
00:34:57.256 ],
00:34:57.256 "core_count": 1
00:34:57.256 }
00:34:57.256 13:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:57.256 13:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:57.256 13:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:57.256 13:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:57.256 | .driver_specific
00:34:57.256 | .nvme_error
00:34:57.256 | .status_code
00:34:57.256 | .command_transient_transport_error'
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 360 > 0 ))
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 392902
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 392902 ']'
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 392902
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392902
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392902'
killing process with pid 392902
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 392902
Received shutdown signal, test time was about 2.000000 seconds
00:34:57.514
00:34:57.514 Latency(us)
00:34:57.514 [2024-10-14T11:45:49.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:57.514 [2024-10-14T11:45:49.367Z] ===================================================================================================================
00:34:57.514 [2024-10-14T11:45:49.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:57.514 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 392902
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=393308
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 393308 /var/tmp/bperf.sock
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 393308 ']'
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:57.773 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:57.773 [2024-10-14 13:45:49.495228] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization...
00:34:57.773 [2024-10-14 13:45:49.495320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393308 ]
00:34:57.773 [2024-10-14 13:45:49.553047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:57.773 [2024-10-14 13:45:49.595864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:58.031 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:58.031 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:34:58.031 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:58.031 13:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:58.290 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:58.290 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:58.290 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:58.290 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:58.290 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:58.290 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:58.856 nvme0n1
00:34:58.856 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:58.856 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:58.856 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:58.856 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:58.856 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:58.856 13:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:58.856 Running I/O for 2 seconds...
00:34:58.856 [2024-10-14 13:45:50.598396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f6458
00:34:58.856 [2024-10-14 13:45:50.599430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.599485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.613475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f6458
00:34:58.856 [2024-10-14 13:45:50.615091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.615142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.626558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fe2e8
00:34:58.856 [2024-10-14 13:45:50.628370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.628424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.635274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e6738
00:34:58.856 [2024-10-14 13:45:50.635981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.636008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.648291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f6cc8
00:34:58.856 [2024-10-14 13:45:50.649244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.649273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.662058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8
00:34:58.856 [2024-10-14 13:45:50.663550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.663578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.674902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166de8a8
00:34:58.856 [2024-10-14 13:45:50.676521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.676565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.683373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e5658
00:34:58.856 [2024-10-14 13:45:50.684079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.684121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.696262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fbcf0
00:34:58.856 [2024-10-14 13:45:50.697150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.697180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:58.856 [2024-10-14 13:45:50.709234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e9e10
00:34:58.856 [2024-10-14 13:45:50.710421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:58.856 [2024-10-14 13:45:50.710450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.724068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e95a0
00:34:59.115 [2024-10-14 13:45:50.725775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.115 [2024-10-14 13:45:50.725818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.736772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fd208
00:34:59.115 [2024-10-14 13:45:50.738613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.115 [2024-10-14 13:45:50.738657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.745486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e2c28
00:34:59.115 [2024-10-14 13:45:50.746407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.115 [2024-10-14 13:45:50.746451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.760094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f96f8
00:34:59.115 [2024-10-14 13:45:50.761722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.115 [2024-10-14 13:45:50.761766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.771293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fcdd0
00:34:59.115 [2024-10-14 13:45:50.772570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.115 [2024-10-14 13:45:50.772599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.783237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fb480
00:34:59.115 [2024-10-14 13:45:50.784545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.115 [2024-10-14 13:45:50.784588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.795981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f7da8
00:34:59.115 [2024-10-14 13:45:50.797404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.115 [2024-10-14 13:45:50.797461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:59.115 [2024-10-14 13:45:50.807583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e1f80
00:34:59.116 [2024-10-14 13:45:50.808605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.116 [2024-10-14 13:45:50.808635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:59.116 [2024-10-14 13:45:50.818928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fbcf0
00:34:59.116 [2024-10-14 13:45:50.819877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.116 [2024-10-14 13:45:50.819921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:34:59.116 [2024-10-14 13:45:50.830783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fb8b8
00:34:59.116 [2024-10-14 13:45:50.831772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1
lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.831801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.843808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f7da8 00:34:59.116 [2024-10-14 13:45:50.844998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.845025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.856857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f81e0 00:34:59.116 [2024-10-14 13:45:50.858144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.858175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.869800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f1ca0 00:34:59.116 [2024-10-14 13:45:50.871213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.871257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.881090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f7970 00:34:59.116 [2024-10-14 13:45:50.882292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.882323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.892565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fcdd0 00:34:59.116 [2024-10-14 13:45:50.893464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.893493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.905793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e5a90 00:34:59.116 [2024-10-14 13:45:50.906647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.906693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.920162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166eaef0 00:34:59.116 [2024-10-14 13:45:50.921751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.921795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.932442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fd640 
00:34:59.116 [2024-10-14 13:45:50.934172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.934202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.945507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e3060 00:34:59.116 [2024-10-14 13:45:50.947355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.947415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.958368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e95a0 00:34:59.116 [2024-10-14 13:45:50.960231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.960275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:59.116 [2024-10-14 13:45:50.967035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f31b8 00:34:59.116 [2024-10-14 13:45:50.967969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.116 [2024-10-14 13:45:50.967998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:50.979486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eff380) with pdu=0x2000166f7970 00:34:59.375 [2024-10-14 13:45:50.980340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:50.980369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:50.991501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f3e60 00:34:59.375 [2024-10-14 13:45:50.992431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:50.992473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.003555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f7100 00:34:59.375 [2024-10-14 13:45:51.004464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.004492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.014657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fc560 00:34:59.375 [2024-10-14 13:45:51.015536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.015579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.028617] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e8088 00:34:59.375 [2024-10-14 13:45:51.029841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.029884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.040270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:34:59.375 [2024-10-14 13:45:51.041447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.041474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.053034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fc560 00:34:59.375 [2024-10-14 13:45:51.054480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.054507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.065060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e49b0 00:34:59.375 [2024-10-14 13:45:51.066270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.066299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:34:59.375 [2024-10-14 13:45:51.077204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e49b0 00:34:59.375 [2024-10-14 13:45:51.078230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.078275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.089404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f0ff8 00:34:59.375 [2024-10-14 13:45:51.090769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.090796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.102064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f31b8 00:34:59.375 [2024-10-14 13:45:51.103543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.103585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.112858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fbcf0 00:34:59.375 [2024-10-14 13:45:51.114429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.114457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.125746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f1868 00:34:59.375 [2024-10-14 13:45:51.126810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.126851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.138045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e99d8 00:34:59.375 [2024-10-14 13:45:51.139396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.375 [2024-10-14 13:45:51.139438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:59.375 [2024-10-14 13:45:51.149399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ebb98 00:34:59.376 [2024-10-14 13:45:51.150563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.376 [2024-10-14 13:45:51.150591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:59.376 [2024-10-14 13:45:51.161547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f3e60 00:34:59.376 [2024-10-14 13:45:51.162671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.376 [2024-10-14 13:45:51.162712] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:59.376 [2024-10-14 13:45:51.174143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166eaef0 00:34:59.376 [2024-10-14 13:45:51.175552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.376 [2024-10-14 13:45:51.175594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:59.376 [2024-10-14 13:45:51.186864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fe2e8 00:34:59.376 [2024-10-14 13:45:51.188421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.376 [2024-10-14 13:45:51.188448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:59.376 [2024-10-14 13:45:51.198621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e5a90 00:34:59.376 [2024-10-14 13:45:51.199975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.376 [2024-10-14 13:45:51.200002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:59.376 [2024-10-14 13:45:51.209713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166edd58 00:34:59.376 [2024-10-14 13:45:51.210920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.376 [2024-10-14 
13:45:51.210961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:59.376 [2024-10-14 13:45:51.222371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fcdd0 00:34:59.376 [2024-10-14 13:45:51.223776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.376 [2024-10-14 13:45:51.223816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.235086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ee5c8 00:34:59.635 [2024-10-14 13:45:51.236225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.236254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.247146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f6020 00:34:59.635 [2024-10-14 13:45:51.248531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.248573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.260388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:34:59.635 [2024-10-14 13:45:51.262143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16712 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.262190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.268535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f7da8 00:34:59.635 [2024-10-14 13:45:51.269364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.269392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.281156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166edd58 00:34:59.635 [2024-10-14 13:45:51.282236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.282264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.295769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ea680 00:34:59.635 [2024-10-14 13:45:51.297230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.297258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.305887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166eea00 00:34:59.635 [2024-10-14 13:45:51.306708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.306734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.318159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f0ff8 00:34:59.635 [2024-10-14 13:45:51.319335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.319364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.330422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e0630 00:34:59.635 [2024-10-14 13:45:51.331358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.331385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.342514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e3d08 00:34:59.635 [2024-10-14 13:45:51.343702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.343744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.356161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4140 00:34:59.635 [2024-10-14 13:45:51.357842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.357868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.368831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fc560 00:34:59.635 [2024-10-14 13:45:51.370834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.370861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.377506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f5be8 00:34:59.635 [2024-10-14 13:45:51.378529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.378570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.392056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f0bc0 00:34:59.635 [2024-10-14 13:45:51.393657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.393683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.404474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fc998 
00:34:59.635 [2024-10-14 13:45:51.406091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.406140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.416085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ec840 00:34:59.635 [2024-10-14 13:45:51.417738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.417779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.428756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f8a50 00:34:59.635 [2024-10-14 13:45:51.430525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.430565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.441423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ea680 00:34:59.635 [2024-10-14 13:45:51.443349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.443376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.450094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1eff380) with pdu=0x2000166e5ec8 00:34:59.635 [2024-10-14 13:45:51.450926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.450953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.462758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166de8a8 00:34:59.635 [2024-10-14 13:45:51.463809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.463836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.474045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f0ff8 00:34:59.635 [2024-10-14 13:45:51.475026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.475052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:59.635 [2024-10-14 13:45:51.486931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ebfd0 00:34:59.635 [2024-10-14 13:45:51.488257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.635 [2024-10-14 13:45:51.488301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.499070] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ddc00 00:34:59.894 [2024-10-14 13:45:51.499968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.500011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.511404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fb048 00:34:59.894 [2024-10-14 13:45:51.512176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.512204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.523716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ef6a8 00:34:59.894 [2024-10-14 13:45:51.524804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.524829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.538350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f6cc8 00:34:59.894 [2024-10-14 13:45:51.540227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.540255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 
dnr:0 00:34:59.894 [2024-10-14 13:45:51.546974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f8a50 00:34:59.894 [2024-10-14 13:45:51.547923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.547948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.559734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e6300 00:34:59.894 [2024-10-14 13:45:51.560713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.560755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.572161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fda78 00:34:59.894 [2024-10-14 13:45:51.573486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.573518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.586644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166de8a8 00:34:59.894 20909.00 IOPS, 81.68 MiB/s [2024-10-14T11:45:51.747Z] [2024-10-14 13:45:51.588525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.588551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.595282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ecc78 00:34:59.894 [2024-10-14 13:45:51.596193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.596221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.607994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f6020 00:34:59.894 [2024-10-14 13:45:51.609074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.894 [2024-10-14 13:45:51.609115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:59.894 [2024-10-14 13:45:51.620754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166de038 00:34:59.895 [2024-10-14 13:45:51.621980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.622009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.633426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166df988 00:34:59.895 [2024-10-14 13:45:51.634929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:59.895 [2024-10-14 13:45:51.634970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.643892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f3a28 00:34:59.895 [2024-10-14 13:45:51.644697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.644724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.658027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fb048 00:34:59.895 [2024-10-14 13:45:51.659693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.659734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.668805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e5220 00:34:59.895 [2024-10-14 13:45:51.670522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.670551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.679275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166df550 00:34:59.895 [2024-10-14 13:45:51.680222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5336 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.680251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.691918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fdeb0 00:34:59.895 [2024-10-14 13:45:51.692927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.692953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.704346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fc998 00:34:59.895 [2024-10-14 13:45:51.705418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.705444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.716009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f1430 00:34:59.895 [2024-10-14 13:45:51.717028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.717068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.730565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166df550 00:34:59.895 [2024-10-14 13:45:51.732062] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.732088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.895 [2024-10-14 13:45:51.740341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ecc78 00:34:59.895 [2024-10-14 13:45:51.741385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.895 [2024-10-14 13:45:51.741413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:00.153 [2024-10-14 13:45:51.754009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ecc78 00:35:00.153 [2024-10-14 13:45:51.755534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.153 [2024-10-14 13:45:51.755561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:00.153 [2024-10-14 13:45:51.766491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4578 00:35:00.153 [2024-10-14 13:45:51.767972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.153 [2024-10-14 13:45:51.767999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.153 [2024-10-14 13:45:51.777392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ee190 00:35:00.153 [2024-10-14 13:45:51.778636] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.153 [2024-10-14 13:45:51.778666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:00.153 [2024-10-14 13:45:51.788434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166eb760 00:35:00.153 [2024-10-14 13:45:51.789379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.153 [2024-10-14 13:45:51.789407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:00.153 [2024-10-14 13:45:51.799729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e6b70 00:35:00.153 [2024-10-14 13:45:51.800521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.800547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.813700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e5a90 00:35:00.154 [2024-10-14 13:45:51.814740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.814768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.825895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e5220 
00:35:00.154 [2024-10-14 13:45:51.826953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.826994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.836982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e01f8 00:35:00.154 [2024-10-14 13:45:51.838030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.838071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.851721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fbcf0 00:35:00.154 [2024-10-14 13:45:51.853356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.853386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.862481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e3d08 00:35:00.154 [2024-10-14 13:45:51.863595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.863622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.874362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eff380) with pdu=0x2000166df550 00:35:00.154 [2024-10-14 13:45:51.875413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.875442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.886786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e9e10 00:35:00.154 [2024-10-14 13:45:51.887797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.887830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.898302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f8618 00:35:00.154 [2024-10-14 13:45:51.899540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.899567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.909812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166efae0 00:35:00.154 [2024-10-14 13:45:51.910802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.910843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.921198] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f57b0 00:35:00.154 [2024-10-14 13:45:51.922229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.922258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.935930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ef270 00:35:00.154 [2024-10-14 13:45:51.937523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.937550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.949075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f46d0 00:35:00.154 [2024-10-14 13:45:51.950754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.950796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.961475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f3a28 00:35:00.154 [2024-10-14 13:45:51.963187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.963230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:35:00.154 [2024-10-14 13:45:51.969704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166ef6a8 00:35:00.154 [2024-10-14 13:45:51.970481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.970530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.982305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fe2e8 00:35:00.154 [2024-10-14 13:45:51.983297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.983339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:00.154 [2024-10-14 13:45:51.997073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166fac10 00:35:00.154 [2024-10-14 13:45:51.998548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.154 [2024-10-14 13:45:51.998574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.009719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f4b08 00:35:00.413 [2024-10-14 13:45:52.011521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.011550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.022478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f6458 00:35:00.413 [2024-10-14 13:45:52.024414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.024457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.031157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166f1430 00:35:00.413 [2024-10-14 13:45:52.031961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.031987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.044958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.045283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.045311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.059599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.059940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.059969] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.074431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.074773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.074801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.089246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.089543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.089586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.104049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.104319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.104347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.118768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.119021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.119064] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.133331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.133575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.133603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.147643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.147924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.147952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.162265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.162561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.162589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.177025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.177284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:00.413 [2024-10-14 13:45:52.177313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.191436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.413 [2024-10-14 13:45:52.191762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.413 [2024-10-14 13:45:52.191789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.413 [2024-10-14 13:45:52.206001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.414 [2024-10-14 13:45:52.206266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.414 [2024-10-14 13:45:52.206293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.414 [2024-10-14 13:45:52.220734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.414 [2024-10-14 13:45:52.220987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.414 [2024-10-14 13:45:52.221014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.414 [2024-10-14 13:45:52.235585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.414 [2024-10-14 13:45:52.235903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:5713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.414 [2024-10-14 13:45:52.235940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.414 [2024-10-14 13:45:52.250633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.414 [2024-10-14 13:45:52.250950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.414 [2024-10-14 13:45:52.250977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.414 [2024-10-14 13:45:52.265388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.414 [2024-10-14 13:45:52.265630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.414 [2024-10-14 13:45:52.265659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.674 [2024-10-14 13:45:52.279795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.674 [2024-10-14 13:45:52.280039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.674 [2024-10-14 13:45:52.280067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.674 [2024-10-14 13:45:52.294464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.674 [2024-10-14 13:45:52.294745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.674 [2024-10-14 13:45:52.294773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.674 [2024-10-14 13:45:52.309055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.674 [2024-10-14 13:45:52.309289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.674 [2024-10-14 13:45:52.309317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.674 [2024-10-14 13:45:52.323720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.674 [2024-10-14 13:45:52.323963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.674 [2024-10-14 13:45:52.323992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.674 [2024-10-14 13:45:52.338337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.674 [2024-10-14 13:45:52.338579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.674 [2024-10-14 13:45:52.338607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:00.674 [2024-10-14 13:45:52.353094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8 00:35:00.674 
[2024-10-14 13:45:52.353328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:00.674 [2024-10-14 13:45:52.353355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:35:00.674 [2024-10-14 13:45:52.367716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8
00:35:00.934 [2024-10-14 13:45:52.571897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8
00:35:00.934 [2024-10-14 13:45:52.572204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:00.934 [2024-10-14 13:45:52.572233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:35:00.934 [2024-10-14 13:45:52.586551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff380) with pdu=0x2000166e4de8
00:35:00.934 [2024-10-14 13:45:52.586795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:00.934 [2024-10-14 13:45:52.586822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:35:00.934 20035.50 IOPS, 78.26 MiB/s
00:35:00.934 Latency(us)
00:35:00.934 [2024-10-14T11:45:52.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:00.934 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:00.934 nvme0n1 : 2.01 20027.86 78.23 0.00 0.00 6376.07 2669.99 16505.36
00:35:00.934 [2024-10-14T11:45:52.787Z] ===================================================================================================================
00:35:00.934 [2024-10-14T11:45:52.787Z] Total : 20027.86 78.23 0.00 0.00 6376.07 2669.99 16505.36
00:35:00.934 {
00:35:00.934   "results": [
00:35:00.934     {
00:35:00.934       "job": "nvme0n1",
00:35:00.934       "core_mask": "0x2",
00:35:00.934       "workload": "randwrite",
00:35:00.934       "status": "finished",
00:35:00.934       "queue_depth": 128,
00:35:00.934       "io_size": 4096,
00:35:00.934       "runtime": 2.008752,
00:35:00.934       "iops": 20027.858092985098,
00:35:00.934       "mibps": 78.23382067572304,
00:35:00.934       "io_failed": 0,
00:35:00.934       "io_timeout": 0,
00:35:00.934       "avg_latency_us": 6376.069976625727,
00:35:00.934       "min_latency_us": 2669.9851851851854,
00:35:00.934       "max_latency_us": 16505.36296296296
00:35:00.934     }
00:35:00.934   ],
00:35:00.934   "core_count": 1
00:35:00.934 }
00:35:00.934 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:00.934 13:45:52
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:00.934 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:00.934 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:00.934 | .driver_specific
00:35:00.934 | .nvme_error
00:35:00.934 | .status_code
00:35:00.934 | .command_transient_transport_error'
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 393308
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 393308 ']'
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 393308
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393308
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 393308'
killing process with pid 393308
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@969 -- # kill 393308
Received shutdown signal, test time was about 2.000000 seconds
00:35:01.193
00:35:01.193 Latency(us)
00:35:01.193 [2024-10-14T11:45:53.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:01.193 [2024-10-14T11:45:53.046Z] ===================================================================================================================
00:35:01.193 [2024-10-14T11:45:53.046Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:01.193 13:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 393308
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=393753
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 393753 /var/tmp/bperf.sock
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 393753 ']'
00:35:01.451 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:01.451 13:45:53
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:01.452 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:01.452 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:01.452 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:01.452 [2024-10-14 13:45:53.181756] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization...
00:35:01.452 [2024-10-14 13:45:53.181848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393753 ]
00:35:01.452 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:01.452 Zero copy mechanism will not be used.
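The `get_transient_errcount` step traced earlier in this log reads `bdev_get_iostat` over the bperf RPC socket and drills into `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` with `jq`. A minimal offline Python sketch of that same field walk is below; the payload shape mirrors the `jq` path from the trace, but the sample values (bdev name aside) are illustrative, not taken from a live controller:

```python
import json

# Illustrative bdev_get_iostat-style payload. Only the field path is taken
# from the jq filter in the trace above; the counter value is made up.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 157
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(stat: dict) -> int:
    # Mirrors: jq -r '.bdevs[0] | .driver_specific | .nvme_error
    #                 | .status_code | .command_transient_transport_error'
    bdev = stat["bdevs"][0]
    return bdev["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]

count = get_transient_errcount(iostat)
assert count > 0  # digest.sh@71 applies the same (( count > 0 )) check
print(count)
```

The test passes only when the injected CRC32C digest errors actually surfaced as transient transport errors, which is why the counter must be strictly positive.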
00:35:01.452 [2024-10-14 13:45:53.251187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:01.452 [2024-10-14 13:45:53.300814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:01.710 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:01.710 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:35:01.710 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:01.710 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:01.968 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:01.968 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:01.968 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:01.968 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:01.968 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:01.968 13:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:02.534 nvme0n1
00:35:02.534 13:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:02.534 13:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:02.534 13:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:02.534 13:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:02.534 13:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:02.534 13:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:02.534 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:02.534 Zero copy mechanism will not be used.
00:35:02.534 Running I/O for 2 seconds...
00:35:02.534 [2024-10-14 13:45:54.322958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:02.534 [2024-10-14 13:45:54.323311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.534 [2024-10-14 13:45:54.323362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:02.534 [2024-10-14 13:45:54.329393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:02.534 [2024-10-14 13:45:54.329725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.534 [2024-10-14 13:45:54.329756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:02.534
[2024-10-14 13:45:54.335967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:02.534 [2024-10-14 13:45:54.336311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.534 [2024-10-14 13:45:54.336341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:02.796 [2024-10-14 13:45:54.514909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:02.796 [2024-10-14 13:45:54.515106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.796 [2024-10-14 13:45:54.515142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0
dnr:0 00:35:02.796 [2024-10-14 13:45:54.519725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.519892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.519922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.524758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.524979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.525009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.529940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.530237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.530291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.535258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.535434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.535499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.540433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.540652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.540715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.545374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.545520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.545567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.550163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.550363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.550407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.555432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.555561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.555590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.561078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.561217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.561247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.567171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.567436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.567466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.572879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.572972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.573025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.579199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.579418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:02.796 [2024-10-14 13:45:54.579447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.585423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.585630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.585660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.591285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.591482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.591511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.596617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.596831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.596861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.601392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.601525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.601572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.606483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.606711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.606756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.611755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.611900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.611929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.617049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.617243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.617273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.622299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.622457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.622486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.627488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.627661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.627690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.632802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.632992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.633021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.637900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.638083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.638148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.643111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 
[2024-10-14 13:45:54.643315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.643372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.796 [2024-10-14 13:45:54.648418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:02.796 [2024-10-14 13:45:54.648622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.796 [2024-10-14 13:45:54.648651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.056 [2024-10-14 13:45:54.653593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.056 [2024-10-14 13:45:54.653833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.653887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.659012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.659288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.659318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.664224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.664372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.664401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.669500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.669700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.669729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.674647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.674911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.674940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.680049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.680225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.680254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.685257] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.685448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.685477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.690468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.690668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.690734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.695749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.695984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.696014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.701003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.701213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.701269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:03.057 [2024-10-14 13:45:54.706266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.706410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.706439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.711517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.711734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.711763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.716748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.716947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.716977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.721951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.722159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.722189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.727201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.727368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.727397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.732294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.732411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.732440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.737465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.737644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.737673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.742808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.742973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.743002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.748054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.748306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.748335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.753288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.753505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.753534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.758560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.758720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.758749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.763775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.763931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:03.057 [2024-10-14 13:45:54.763960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.768824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.768989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.769018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.773997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.774210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.774238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.779185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.779436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.779465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.784346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.784519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.784548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.789423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.057 [2024-10-14 13:45:54.789594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.057 [2024-10-14 13:45:54.789623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.057 [2024-10-14 13:45:54.794633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.058 [2024-10-14 13:45:54.794777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.058 [2024-10-14 13:45:54.794805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.058 [2024-10-14 13:45:54.800027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.058 [2024-10-14 13:45:54.800266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.058 [2024-10-14 13:45:54.800296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.058 [2024-10-14 13:45:54.805211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.058 [2024-10-14 13:45:54.805475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.058 [2024-10-14 13:45:54.805504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.058 [2024-10-14 13:45:54.810382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.058 [2024-10-14 13:45:54.810550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.058 [2024-10-14 13:45:54.810579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.058 [2024-10-14 13:45:54.815614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.058 [2024-10-14 13:45:54.815766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.058 [2024-10-14 13:45:54.815796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.058 [2024-10-14 13:45:54.820851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.058 [2024-10-14 13:45:54.821103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.058 [2024-10-14 13:45:54.821166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.058 [2024-10-14 13:45:54.826152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 
00:35:03.058 [2024-10-14 13:45:54.826307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.826374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.831431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.831638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.831668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.836584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.836851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.836907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.841754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.841907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.841935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.846850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.847111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.847151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.852059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.852219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.852275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.857301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.857542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.857571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.862537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.862760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.862790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.867805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.867950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.867979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.872991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.873170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.873201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.878373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.878619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.878648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.883628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.883830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.883860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.888759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.888889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.888918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.893897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.894116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.894153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.899252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.899454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.899483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.904475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.904652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.904681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.058 [2024-10-14 13:45:54.909703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.058 [2024-10-14 13:45:54.909862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.058 [2024-10-14 13:45:54.909891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.915146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.915311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.915341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.920451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.920665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.920694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.925906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.926146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.926204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.931927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.932083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.932112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.938162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.938293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.938348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.944215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.944438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.944467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.950426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.950522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.950571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.956025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.956148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.956211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.961714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.961809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.961859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.966623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.966731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.966771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.971312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.971486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.971545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.975883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.976060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.976108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.980379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.980571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.980614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.984897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.985059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.985099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.989391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.989567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.989613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.993919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.318 [2024-10-14 13:45:54.994080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.318 [2024-10-14 13:45:54.994123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.318 [2024-10-14 13:45:54.998371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:54.998528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:54.998574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.002939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.003092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.003138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.007395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.007544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.007612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.011870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.012040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.012084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.016387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.016571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.016615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.020825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.020981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.021040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.025209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.025341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.025383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.029616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.029770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.029830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.034102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.034255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.034298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.038443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.038609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.038670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.042877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.043061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.043109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.047286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.047400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.047439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.051729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.051874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.051913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.056049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.056216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.056290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.060366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.060497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.060550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.064738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.064892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.064941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.069203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.069349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.069397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.073736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.073877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.073906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.078749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.078923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.078953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.084019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.084277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.084337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.089777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.089974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.090004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.095195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.095352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.095392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.099597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.099761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.099805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.104315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.104483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.104512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.108809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.108958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.319 [2024-10-14 13:45:55.109007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.319 [2024-10-14 13:45:55.113569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.319 [2024-10-14 13:45:55.113802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.113832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.118816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.119002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.119032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.124345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.124595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.124624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.130426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.130534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.130564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.136582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.136766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.136795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.142807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.143019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.143048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.149295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.149520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.149549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.155483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.155621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.155650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.161580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.161716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.161745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.166758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.166856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.166896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.320 [2024-10-14 13:45:55.171177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.320 [2024-10-14 13:45:55.171353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.320 [2024-10-14 13:45:55.171418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.175553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.175657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.175687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.179919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.180043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.180072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.184278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.184375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.184449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.188731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.188839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.188868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.193227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.193379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.193415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.197774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.197924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.197963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.202168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.202290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.202319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.206670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.206827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.206872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.211168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.580 [2024-10-14 13:45:55.211249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.580 [2024-10-14 13:45:55.211314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.580 [2024-10-14 13:45:55.215583] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.215691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.215728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.219994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.220120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.220187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.224432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.224537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.224585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.228828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.228988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.229033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 
13:45:55.233305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.233430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.233482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.237846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.237960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.238014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.242328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.242455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.242483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.246741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.246881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.246929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.251210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.251329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.251381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.255694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.255782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.255844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.260138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.260240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.260282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.264523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.264707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.264768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.268934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.580 [2024-10-14 13:45:55.269068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.580 [2024-10-14 13:45:55.269106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.580 [2024-10-14 13:45:55.273355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.273453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.273505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.277809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.277939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.277992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.282290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.282401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.282453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.286644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.286769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.286798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.291097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.291215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.291260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.295509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.295628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.295686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.299907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.300031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:03.581 [2024-10-14 13:45:55.300060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.304250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.304397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.304439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.308747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.308869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.308929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.313159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.313314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.313367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.318968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 6102.00 IOPS, 762.75 MiB/s [2024-10-14T11:45:55.434Z] [2024-10-14 13:45:55.319304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.319348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.323485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.323592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.323640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.328024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.328229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.328258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.333095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.333287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.333327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.338226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 
00:35:03.581 [2024-10-14 13:45:55.338371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.338401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.343953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.344034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.344062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.349644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.349847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.349877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.355854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.355979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.356031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.361508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.361633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.361700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.367080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.367583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.367642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.372605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.372740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.372800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.378315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.378395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.378423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 
13:45:55.383435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.383558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.383612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.388035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.388276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.388334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.392499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.392667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.392722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.396954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.397142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.397191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.401332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.401523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.581 [2024-10-14 13:45:55.401573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.581 [2024-10-14 13:45:55.405768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.581 [2024-10-14 13:45:55.405940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.582 [2024-10-14 13:45:55.405988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.582 [2024-10-14 13:45:55.410153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.582 [2024-10-14 13:45:55.410323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.582 [2024-10-14 13:45:55.410352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.582 [2024-10-14 13:45:55.414726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.582 [2024-10-14 13:45:55.414872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.582 [2024-10-14 13:45:55.414932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.582 [2024-10-14 13:45:55.419192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.582 [2024-10-14 13:45:55.419361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.582 [2024-10-14 13:45:55.419416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.582 [2024-10-14 13:45:55.423675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.582 [2024-10-14 13:45:55.423846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.582 [2024-10-14 13:45:55.423896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.582 [2024-10-14 13:45:55.428154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.582 [2024-10-14 13:45:55.428309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.582 [2024-10-14 13:45:55.428349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.582 [2024-10-14 13:45:55.432498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.582 [2024-10-14 13:45:55.432688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.582 [2024-10-14 13:45:55.432743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.841 [2024-10-14 13:45:55.436979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.841 [2024-10-14 13:45:55.437101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.841 [2024-10-14 13:45:55.437158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.841 [2024-10-14 13:45:55.441422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.841 [2024-10-14 13:45:55.441617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.841 [2024-10-14 13:45:55.441659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.841 [2024-10-14 13:45:55.445980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.841 [2024-10-14 13:45:55.446175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.841 [2024-10-14 13:45:55.446223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.841 [2024-10-14 13:45:55.450483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.841 [2024-10-14 13:45:55.450656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:03.841 [2024-10-14 13:45:55.450695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.841 [2024-10-14 13:45:55.455008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.841 [2024-10-14 13:45:55.455165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.841 [2024-10-14 13:45:55.455212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.841 [2024-10-14 13:45:55.459448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.841 [2024-10-14 13:45:55.459579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.841 [2024-10-14 13:45:55.459625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.841 [2024-10-14 13:45:55.464037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.841 [2024-10-14 13:45:55.464234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.842 [2024-10-14 13:45:55.464291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.842 [2024-10-14 13:45:55.468545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:03.842 [2024-10-14 13:45:55.468712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.468769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.473065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.473243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.473280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.477506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.477657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.477698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.482033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.482248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.482278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.486578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.486750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.486813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.491084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.491235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.491281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.495592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.495761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.495815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.500100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.500302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.500333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.504566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.504737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.504783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.509093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.509283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.509332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.513646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.513797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.513836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.518116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.518307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.518365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.522645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.522793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.522830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.527229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.527416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.527459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.531683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.531830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.531874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.536293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.536533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.536577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.540728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.540888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.540924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.545142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.545318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.545365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.549666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.549827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.549879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.554029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.554209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.554257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.558365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.558526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.558602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.562750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.562909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.562972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.567125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.567361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.567411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.571561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.571718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.571759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.576013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.576201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.576257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.580485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.580662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.580710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.584911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.585097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.585155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.589257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.589376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.589406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.593632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.593776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.593825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.598173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.598317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.598391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.602990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.603123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.603163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.607729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.607891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.607936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.612499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.612640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.612698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.617504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.617646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.617675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.622779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.622935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.622988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.627498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.627720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.627786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.631915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.632062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.632099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.636416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.636577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.636621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.641437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.842 [2024-10-14 13:45:55.641693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.842 [2024-10-14 13:45:55.641747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.842 [2024-10-14 13:45:55.646413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.646599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.646648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.651638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.651866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.651922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.657685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.657861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.657926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.662185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.662414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.662502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.666611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.666778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.666808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.671626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.671767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.671797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.676884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.677053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.677096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.681434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.681637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.681683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.686045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.686252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.686297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.690656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:03.843 [2024-10-14 13:45:55.690844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:03.843 [2024-10-14 13:45:55.690898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:03.843 [2024-10-14 13:45:55.695149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.695368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.695418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.700245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.700520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.700584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.705491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.705722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.705774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.711954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.712240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.712286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.716782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.716918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.716947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.721182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.721320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.721371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.725766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.725914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.725962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.731283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.731418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.731475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.735849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.735960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.736016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.740263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.740402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.740431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.744599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.744727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.744769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.748962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.749073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.749117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.753270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.753372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.753414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.757513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.757641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.757698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.761883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.762019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.762068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.766278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.766364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.766418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.770516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.770641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.770694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.774916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.775066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.775093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.779409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.779518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.779586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.783766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.783891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.783940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.788243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.103 [2024-10-14 13:45:55.788370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.103 [2024-10-14 13:45:55.788419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.103 [2024-10-14 13:45:55.792505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.792595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.792650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.797014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.797173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.797202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.801352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.801467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.801496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.805727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.805847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.805876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.810137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.810251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.810280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.814574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.814676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.814703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.818984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.819101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.819143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.823419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.823524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.823575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.827772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.827914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.827957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.832244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.832347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.832394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.836633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.836753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.836791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.840976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90
00:35:04.104 [2024-10-14 13:45:55.841054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:04.104 [2024-10-14 13:45:55.841109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:04.104 [2024-10-14 13:45:55.845350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.845463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.845492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.850016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.850166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.850199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.854936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.855044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.855087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.860330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.860501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.860554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.865391] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.865507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.865563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.870000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.870144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.870191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.874462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.874580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.874632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.878922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.879047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.879109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:35:04.104 [2024-10-14 13:45:55.883711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.883883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.883936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.888308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.888432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.893005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.893177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.893233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.897616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.897735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.897765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.902074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.902201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.902250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.907100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.907376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.907405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.912506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.912728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.912758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.918079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.918329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.918358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.923742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.104 [2024-10-14 13:45:55.923877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.104 [2024-10-14 13:45:55.923920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.104 [2024-10-14 13:45:55.929074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.105 [2024-10-14 13:45:55.929229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.105 [2024-10-14 13:45:55.929258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.105 [2024-10-14 13:45:55.934486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.105 [2024-10-14 13:45:55.934728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.105 [2024-10-14 13:45:55.934771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.105 [2024-10-14 13:45:55.939918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.105 [2024-10-14 13:45:55.940053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:04.105 [2024-10-14 13:45:55.940096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.105 [2024-10-14 13:45:55.945211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.105 [2024-10-14 13:45:55.945384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.105 [2024-10-14 13:45:55.945421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.105 [2024-10-14 13:45:55.950425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.105 [2024-10-14 13:45:55.950588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.105 [2024-10-14 13:45:55.950618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.105 [2024-10-14 13:45:55.956020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.956183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.956212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.961375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.961528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.961557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.966786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.966976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.967005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.972008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.972190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.972219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.977191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.977388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.977418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.982294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.982465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.982494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.987775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.987924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.987952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.992986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.993165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.993193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:55.998428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:55.998601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:55.998630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:56.003760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 
00:35:04.364 [2024-10-14 13:45:56.003932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:56.003960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:56.008976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:56.009143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:56.009188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.364 [2024-10-14 13:45:56.014068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.364 [2024-10-14 13:45:56.014202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.364 [2024-10-14 13:45:56.014231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.019303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.019426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.019455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.024581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.024741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.024769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.029689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.029844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.029872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.034989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.035184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.035213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.040173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.040387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.040416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.045264] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.045470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.045499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.050455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.050672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.050701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.055736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.055879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.055908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.060991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.061217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.061246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:04.365 [2024-10-14 13:45:56.066372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.066546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.066575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.071588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.071757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.071786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.076659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.076845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.076873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.081878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.082044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.082077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.087217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.087373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.087402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.092514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.092710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.092739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.097603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.097802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.097845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.103727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.103837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.103885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.108533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.108655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.108718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.113031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.113165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.113220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.117724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.117874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.117902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.122198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.122333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:04.365 [2024-10-14 13:45:56.122381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.127069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.127309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.127338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.132353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.132511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.132540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.138053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.138229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.138257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.143537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.143665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.143718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.148030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.148199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.148249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.152788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.152937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.152988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.157551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.157647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.365 [2024-10-14 13:45:56.157674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.365 [2024-10-14 13:45:56.162080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.365 [2024-10-14 13:45:56.162231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.162260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.166753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.166882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.166910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.172732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.172913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.172941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.177962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.178126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.178204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.183292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.183451] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.183480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.187945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.188092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.188120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.193157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.193289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.193317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.198467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.198612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.198640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.203717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with 
pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.203963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.203992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.208869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.209041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.209069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.366 [2024-10-14 13:45:56.214022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.366 [2024-10-14 13:45:56.214137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.366 [2024-10-14 13:45:56.214173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.219571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.219722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.219751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.224847] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.225045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.225074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.230053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.230258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.230286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.235168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.235341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.235369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.240262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.240422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.240450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 
13:45:56.245525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.245750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.245778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.250664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.250813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.250841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.255932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.256187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.256217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.261208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.261376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.261405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.266367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.266575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.266604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.271397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.271550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.271578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.276605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.276752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.276781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.281667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.281827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.281854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.286862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.287027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.287055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.291960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.292089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.292117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.297114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.297297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.297325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.302356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.302529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.302557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.307793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.307984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.308013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.313036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.313205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.313235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.625 [2024-10-14 13:45:56.318115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1eff6c0) with pdu=0x2000166fef90 00:35:04.625 [2024-10-14 13:45:56.318282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.625 [2024-10-14 13:45:56.318311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:04.625 6234.50 IOPS, 779.31 MiB/s 00:35:04.625 Latency(us) 00:35:04.625 [2024-10-14T11:45:56.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.625 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:04.625 nvme0n1 : 2.00 6230.30 778.79 0.00 0.00 2558.97 1844.72 
7573.05 00:35:04.625 [2024-10-14T11:45:56.478Z] =================================================================================================================== 00:35:04.625 [2024-10-14T11:45:56.478Z] Total : 6230.30 778.79 0.00 0.00 2558.97 1844.72 7573.05 00:35:04.625 { 00:35:04.625 "results": [ 00:35:04.625 { 00:35:04.625 "job": "nvme0n1", 00:35:04.625 "core_mask": "0x2", 00:35:04.625 "workload": "randwrite", 00:35:04.625 "status": "finished", 00:35:04.625 "queue_depth": 16, 00:35:04.625 "io_size": 131072, 00:35:04.625 "runtime": 2.004559, 00:35:04.625 "iops": 6230.298035627787, 00:35:04.625 "mibps": 778.7872544534733, 00:35:04.625 "io_failed": 0, 00:35:04.625 "io_timeout": 0, 00:35:04.625 "avg_latency_us": 2558.971939870049, 00:35:04.625 "min_latency_us": 1844.717037037037, 00:35:04.625 "max_latency_us": 7573.0488888888885 00:35:04.625 } 00:35:04.625 ], 00:35:04.625 "core_count": 1 00:35:04.625 } 00:35:04.625 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:04.625 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:04.625 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:04.625 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:04.625 | .driver_specific 00:35:04.625 | .nvme_error 00:35:04.625 | .status_code 00:35:04.625 | .command_transient_transport_error' 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 402 > 0 )) 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 393753 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 393753 ']' 
00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 393753 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393753 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 393753' 00:35:04.884 killing process with pid 393753 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 393753 00:35:04.884 Received shutdown signal, test time was about 2.000000 seconds 00:35:04.884 00:35:04.884 Latency(us) 00:35:04.884 [2024-10-14T11:45:56.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.884 [2024-10-14T11:45:56.737Z] =================================================================================================================== 00:35:04.884 [2024-10-14T11:45:56.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.884 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 393753 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 392417 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 392417 ']' 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # kill -0 392417 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392417 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392417' 00:35:05.142 killing process with pid 392417 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 392417 00:35:05.142 13:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 392417 00:35:05.401 00:35:05.401 real 0m15.332s 00:35:05.401 user 0m30.282s 00:35:05.401 sys 0m4.439s 00:35:05.401 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:05.401 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.401 ************************************ 00:35:05.401 END TEST nvmf_digest_error 00:35:05.401 ************************************ 00:35:05.401 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:05.401 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:05.401 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:05.401 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.402 rmmod nvme_tcp 00:35:05.402 rmmod nvme_fabrics 00:35:05.402 rmmod nvme_keyring 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 392417 ']' 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 392417 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 392417 ']' 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 392417 00:35:05.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (392417) - No such process 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 392417 is not found' 00:35:05.402 Process with pid 392417 is not found 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:35:05.402 13:45:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.402 13:45:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.942 00:35:07.942 real 0m35.574s 00:35:07.942 user 1m1.467s 00:35:07.942 sys 0m10.694s 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:07.942 ************************************ 00:35:07.942 END TEST nvmf_digest 00:35:07.942 ************************************ 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:07.942 ************************************ 00:35:07.942 START TEST nvmf_bdevperf 00:35:07.942 ************************************ 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:07.942 * Looking for test storage... 00:35:07.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# export 'LCOV_OPTS= 00:35:07.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.942 --rc genhtml_branch_coverage=1 00:35:07.942 --rc genhtml_function_coverage=1 00:35:07.942 --rc genhtml_legend=1 00:35:07.942 --rc geninfo_all_blocks=1 00:35:07.942 --rc geninfo_unexecuted_blocks=1 00:35:07.942 00:35:07.942 ' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:07.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.942 --rc genhtml_branch_coverage=1 00:35:07.942 --rc genhtml_function_coverage=1 00:35:07.942 --rc genhtml_legend=1 00:35:07.942 --rc geninfo_all_blocks=1 00:35:07.942 --rc geninfo_unexecuted_blocks=1 00:35:07.942 00:35:07.942 ' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:07.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.942 --rc genhtml_branch_coverage=1 00:35:07.942 --rc genhtml_function_coverage=1 00:35:07.942 --rc genhtml_legend=1 00:35:07.942 --rc geninfo_all_blocks=1 00:35:07.942 --rc geninfo_unexecuted_blocks=1 00:35:07.942 00:35:07.942 ' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:07.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.942 --rc genhtml_branch_coverage=1 00:35:07.942 --rc genhtml_function_coverage=1 00:35:07.942 --rc genhtml_legend=1 00:35:07.942 --rc geninfo_all_blocks=1 00:35:07.942 --rc geninfo_unexecuted_blocks=1 00:35:07.942 00:35:07.942 ' 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.942 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:07.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:35:07.943 13:45:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:09.901 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:09.901 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp 
== tcp ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:09.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.901 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:09.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@440 -- # is_hw=yes 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:09.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:09.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:35:09.902 00:35:09.902 --- 10.0.0.2 ping statistics --- 00:35:09.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.902 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:09.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:09.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:35:09.902 00:35:09.902 --- 10.0.0.1 ping statistics --- 00:35:09.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.902 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=396193 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 396193 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 396193 ']' 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:09.902 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:09.902 [2024-10-14 13:46:01.730321] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:35:09.902 [2024-10-14 13:46:01.730412] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.177 [2024-10-14 13:46:01.797694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:10.177 [2024-10-14 13:46:01.843186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.177 [2024-10-14 13:46:01.843239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:10.177 [2024-10-14 13:46:01.843264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.177 [2024-10-14 13:46:01.843276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.177 [2024-10-14 13:46:01.843286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:10.177 [2024-10-14 13:46:01.844665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:10.177 [2024-10-14 13:46:01.844728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.177 [2024-10-14 13:46:01.844725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:10.177 [2024-10-14 13:46:01.975824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.177 13:46:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.177 13:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:10.177 Malloc0 00:35:10.177 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.177 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:10.177 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.177 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.460 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:10.461 [2024-10-14 13:46:02.033367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:10.461 { 00:35:10.461 "params": { 00:35:10.461 "name": "Nvme$subsystem", 00:35:10.461 "trtype": "$TEST_TRANSPORT", 00:35:10.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.461 "adrfam": "ipv4", 00:35:10.461 "trsvcid": "$NVMF_PORT", 00:35:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.461 "hdgst": ${hdgst:-false}, 00:35:10.461 "ddgst": ${ddgst:-false} 00:35:10.461 }, 00:35:10.461 "method": "bdev_nvme_attach_controller" 00:35:10.461 } 00:35:10.461 EOF 00:35:10.461 )") 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:10.461 13:46:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:10.461 "params": { 00:35:10.461 "name": "Nvme1", 00:35:10.461 "trtype": "tcp", 00:35:10.461 "traddr": "10.0.0.2", 00:35:10.461 "adrfam": "ipv4", 00:35:10.461 "trsvcid": "4420", 00:35:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:10.461 "hdgst": false, 00:35:10.461 "ddgst": false 00:35:10.461 }, 00:35:10.461 "method": "bdev_nvme_attach_controller" 00:35:10.461 }' 00:35:10.461 [2024-10-14 13:46:02.081990] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:35:10.461 [2024-10-14 13:46:02.082061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396220 ] 00:35:10.461 [2024-10-14 13:46:02.142893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.461 [2024-10-14 13:46:02.189196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.718 Running I/O for 1 seconds... 
00:35:11.652 8270.00 IOPS, 32.30 MiB/s 00:35:11.652 Latency(us) 00:35:11.652 [2024-10-14T11:46:03.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.652 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:11.652 Verification LBA range: start 0x0 length 0x4000 00:35:11.652 Nvme1n1 : 1.01 8361.59 32.66 0.00 0.00 15224.89 1820.44 14272.28 00:35:11.652 [2024-10-14T11:46:03.505Z] =================================================================================================================== 00:35:11.652 [2024-10-14T11:46:03.505Z] Total : 8361.59 32.66 0.00 0.00 15224.89 1820.44 14272.28 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=396486 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:11.910 { 00:35:11.910 "params": { 00:35:11.910 "name": "Nvme$subsystem", 00:35:11.910 "trtype": "$TEST_TRANSPORT", 00:35:11.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.910 "adrfam": "ipv4", 00:35:11.910 "trsvcid": "$NVMF_PORT", 00:35:11.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.910 "hdgst": ${hdgst:-false}, 00:35:11.910 "ddgst": 
${ddgst:-false} 00:35:11.910 }, 00:35:11.910 "method": "bdev_nvme_attach_controller" 00:35:11.910 } 00:35:11.910 EOF 00:35:11.910 )") 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:11.910 13:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:11.910 "params": { 00:35:11.910 "name": "Nvme1", 00:35:11.910 "trtype": "tcp", 00:35:11.910 "traddr": "10.0.0.2", 00:35:11.910 "adrfam": "ipv4", 00:35:11.910 "trsvcid": "4420", 00:35:11.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:11.910 "hdgst": false, 00:35:11.910 "ddgst": false 00:35:11.910 }, 00:35:11.910 "method": "bdev_nvme_attach_controller" 00:35:11.910 }' 00:35:11.910 [2024-10-14 13:46:03.722908] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:35:11.910 [2024-10-14 13:46:03.723001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396486 ] 00:35:12.168 [2024-10-14 13:46:03.783855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.168 [2024-10-14 13:46:03.829015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.425 Running I/O for 15 seconds... 
00:35:14.293 8137.00 IOPS, 31.79 MiB/s [2024-10-14T11:46:06.715Z] 8146.00 IOPS, 31.82 MiB/s [2024-10-14T11:46:06.715Z] 13:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 396193 00:35:14.862 13:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:14.862 [2024-10-14 13:46:06.693262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:14.862 [2024-10-14 13:46:06.693679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.693972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.693987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:14.862 [2024-10-14 13:46:06.694238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.862 [2024-10-14 13:46:06.694521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.862 [2024-10-14 13:46:06.694533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 
[2024-10-14 13:46:06.694735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.694987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.694999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:14.863 [2024-10-14 13:46:06.695219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.863 [2024-10-14 13:46:06.695675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:14.863 [2024-10-14 13:46:06.695700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.863 [2024-10-14 13:46:06.695713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.695977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.695989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.696003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.696015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.696029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.696041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.696054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.696066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.696080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.696092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.696120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.864 [2024-10-14 13:46:06.696142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:14.864 [2024-10-14 13:46:06.696159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:14.864 [2024-10-14 13:46:06.696178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.864 [2024-10-14 13:46:06.696900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.864 [2024-10-14 13:46:06.696912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.696926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.865 [2024-10-14 13:46:06.696937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.696951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.865 [2024-10-14 13:46:06.696962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.696979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.865 [2024-10-14 13:46:06.696992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.865 [2024-10-14 13:46:06.697018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.865 [2024-10-14 13:46:06.697043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b80f70 is same with the state(6) to be set
00:35:14.865 [2024-10-14 13:46:06.697072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:14.865 [2024-10-14 13:46:06.697083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:14.865 [2024-10-14 13:46:06.697093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36344 len:8 PRP1 0x0 PRP2 0x0
00:35:14.865 [2024-10-14 13:46:06.697104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697209] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b80f70 was disconnected and freed. reset controller.
00:35:14.865 [2024-10-14 13:46:06.697285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:14.865 [2024-10-14 13:46:06.697306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:14.865 [2024-10-14 13:46:06.697335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:14.865 [2024-10-14 13:46:06.697363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:14.865 [2024-10-14 13:46:06.697390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:14.865 [2024-10-14 13:46:06.697402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:14.865 [2024-10-14 13:46:06.700573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:14.865 [2024-10-14 13:46:06.700609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:14.865 [2024-10-14 13:46:06.701276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.865 [2024-10-14 13:46:06.701305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:14.865 [2024-10-14 13:46:06.701323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:14.865 [2024-10-14 13:46:06.701565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:14.865 [2024-10-14 13:46:06.701764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:14.865 [2024-10-14 13:46:06.701782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:14.865 [2024-10-14 13:46:06.701797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:14.865 [2024-10-14 13:46:06.705012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:14.865 [2024-10-14 13:46:06.714264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.714668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.714698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.714714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.714946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.715186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.715208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.715221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.718358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.727556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.728054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.728096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.728113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.728351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.728581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.728599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.728611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.731515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.740721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.741159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.741203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.741220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.741459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.741651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.741669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.741681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.744531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.753881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.754219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.754247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.754263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.754484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.754693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.754711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.754722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.757674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.766973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.767390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.767417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.767448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.767681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.767873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.767892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.767904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.770834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.780190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.780567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.780610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.780626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.780877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.781069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.781087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.781099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.784035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.793363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.793833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.793881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.793901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.794174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.794399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.794433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.794445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.797490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.806716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.807088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.807138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.807156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.807411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.807619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.807638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.807650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.810581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.820095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.820579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.820608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.820625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.820866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.821096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.821115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.821138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.824216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.833292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.833678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.833720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.833735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.833983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.834235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.834261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.125 [2024-10-14 13:46:06.834275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.125 [2024-10-14 13:46:06.837302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.125 [2024-10-14 13:46:06.846564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.125 [2024-10-14 13:46:06.847039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.125 [2024-10-14 13:46:06.847093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.125 [2024-10-14 13:46:06.847108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.125 [2024-10-14 13:46:06.847383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.125 [2024-10-14 13:46:06.847591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.125 [2024-10-14 13:46:06.847610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.847622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.850550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.859727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.860067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.860144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.860163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.860403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.860628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.860646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.860658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.863663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.872880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.873247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.873276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.873292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.873534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.873726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.873744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.873755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.876793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.886115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.886515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.886542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.886558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.886792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.887000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.887019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.887030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.889926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.899382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.899762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.899805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.899821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.900088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.900311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.900331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.900343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.903301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.912539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.912949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.913010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.913024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.913281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.913479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.913512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.913525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.916620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.925711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.926013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.926056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.926071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.926322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.926549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.926568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.926579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.929537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.938955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.939332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.939361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.939377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.939613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.939821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.939839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.939851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.942750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.952157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.952506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.952549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.952565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.952778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.953000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.953020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.953032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.956483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.126 [2024-10-14 13:46:06.966086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.126 [2024-10-14 13:46:06.966619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.126 [2024-10-14 13:46:06.966647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.126 [2024-10-14 13:46:06.966663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.126 [2024-10-14 13:46:06.966883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.126 [2024-10-14 13:46:06.967096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.126 [2024-10-14 13:46:06.967138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.126 [2024-10-14 13:46:06.967158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.126 [2024-10-14 13:46:06.970234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.386 [2024-10-14 13:46:06.979733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.386 [2024-10-14 13:46:06.980182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.386 [2024-10-14 13:46:06.980211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.386 [2024-10-14 13:46:06.980227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.386 [2024-10-14 13:46:06.980454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.386 [2024-10-14 13:46:06.980661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.386 [2024-10-14 13:46:06.980679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.386 [2024-10-14 13:46:06.980691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.386 [2024-10-14 13:46:06.983792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.386 [2024-10-14 13:46:06.992733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.386 [2024-10-14 13:46:06.993096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.386 [2024-10-14 13:46:06.993146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.386 [2024-10-14 13:46:06.993164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.386 [2024-10-14 13:46:06.993416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.386 [2024-10-14 13:46:06.993626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.386 [2024-10-14 13:46:06.993644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.386 [2024-10-14 13:46:06.993656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.386 [2024-10-14 13:46:06.996645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.386 [2024-10-14 13:46:07.005814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.386 [2024-10-14 13:46:07.006181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.386 [2024-10-14 13:46:07.006209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.386 [2024-10-14 13:46:07.006234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.386 [2024-10-14 13:46:07.006473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.386 [2024-10-14 13:46:07.006680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.386 [2024-10-14 13:46:07.006699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.386 [2024-10-14 13:46:07.006710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.386 [2024-10-14 13:46:07.009643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.386 [2024-10-14 13:46:07.018899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.386 [2024-10-14 13:46:07.019243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.386 [2024-10-14 13:46:07.019278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.386 [2024-10-14 13:46:07.019294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.386 [2024-10-14 13:46:07.019515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.386 [2024-10-14 13:46:07.019723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.386 [2024-10-14 13:46:07.019742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.386 [2024-10-14 13:46:07.019754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.386 [2024-10-14 13:46:07.022668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.386 [2024-10-14 13:46:07.032003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.386 [2024-10-14 13:46:07.032394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.386 [2024-10-14 13:46:07.032437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.386 [2024-10-14 13:46:07.032453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.386 [2024-10-14 13:46:07.032704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.386 [2024-10-14 13:46:07.032912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.386 [2024-10-14 13:46:07.032930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.386 [2024-10-14 13:46:07.032942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.386 [2024-10-14 13:46:07.035821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 [2024-10-14 13:46:07.045215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.045528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.045554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.387 [2024-10-14 13:46:07.045569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.387 [2024-10-14 13:46:07.045782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.387 [2024-10-14 13:46:07.045989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.387 [2024-10-14 13:46:07.046008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.387 [2024-10-14 13:46:07.046020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.387 [2024-10-14 13:46:07.048914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 [2024-10-14 13:46:07.058471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.058850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.058893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.387 [2024-10-14 13:46:07.058909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.387 [2024-10-14 13:46:07.059173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.387 [2024-10-14 13:46:07.059377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.387 [2024-10-14 13:46:07.059396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.387 [2024-10-14 13:46:07.059408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.387 [2024-10-14 13:46:07.062337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 [2024-10-14 13:46:07.071643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.072036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.072063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.387 [2024-10-14 13:46:07.072079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.387 [2024-10-14 13:46:07.072346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.387 [2024-10-14 13:46:07.072568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.387 [2024-10-14 13:46:07.072586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.387 [2024-10-14 13:46:07.072598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.387 [2024-10-14 13:46:07.075558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 [2024-10-14 13:46:07.084702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.085194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.085236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.387 [2024-10-14 13:46:07.085252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.387 [2024-10-14 13:46:07.085501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.387 [2024-10-14 13:46:07.085708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.387 [2024-10-14 13:46:07.085726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.387 [2024-10-14 13:46:07.085738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.387 [2024-10-14 13:46:07.088674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 [2024-10-14 13:46:07.097907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.098311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.098354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.387 [2024-10-14 13:46:07.098370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.387 [2024-10-14 13:46:07.098634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.387 [2024-10-14 13:46:07.098826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.387 [2024-10-14 13:46:07.098844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.387 [2024-10-14 13:46:07.098856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.387 [2024-10-14 13:46:07.101796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 [2024-10-14 13:46:07.111026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.111484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.111538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.387 [2024-10-14 13:46:07.111554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.387 [2024-10-14 13:46:07.111814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.387 [2024-10-14 13:46:07.112006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.387 [2024-10-14 13:46:07.112024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.387 [2024-10-14 13:46:07.112036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.387 [2024-10-14 13:46:07.114927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 [2024-10-14 13:46:07.124377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.124753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.124780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.387 [2024-10-14 13:46:07.124796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.387 [2024-10-14 13:46:07.125030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.387 [2024-10-14 13:46:07.125255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.387 [2024-10-14 13:46:07.125276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.387 [2024-10-14 13:46:07.125288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.387 [2024-10-14 13:46:07.128228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.387 6933.33 IOPS, 27.08 MiB/s [2024-10-14T11:46:07.240Z]
00:35:15.387 [2024-10-14 13:46:07.138999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.387 [2024-10-14 13:46:07.139458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.387 [2024-10-14 13:46:07.139487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.139504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.139731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.139938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.139956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.139968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.142871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.388 [2024-10-14 13:46:07.152271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.388 [2024-10-14 13:46:07.152746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.388 [2024-10-14 13:46:07.152796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.152817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.153079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.153319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.153340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.153352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.156270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.388 [2024-10-14 13:46:07.165372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.388 [2024-10-14 13:46:07.165759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.388 [2024-10-14 13:46:07.165824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.165865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.166109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.166330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.166349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.166361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.169314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.388 [2024-10-14 13:46:07.178663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.388 [2024-10-14 13:46:07.179046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.388 [2024-10-14 13:46:07.179087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.179101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.179358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.179568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.179586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.179598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.182478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.388 [2024-10-14 13:46:07.191792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.388 [2024-10-14 13:46:07.192260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.388 [2024-10-14 13:46:07.192302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.192319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.192557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.192756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.192774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.192786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.195784] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.388 [2024-10-14 13:46:07.205101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.388 [2024-10-14 13:46:07.205561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.388 [2024-10-14 13:46:07.205589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.205605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.205833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.206071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.206091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.206119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.209704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.388 [2024-10-14 13:46:07.218498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.388 [2024-10-14 13:46:07.218892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.388 [2024-10-14 13:46:07.218919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.218934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.219166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.219394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.219429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.219441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.222375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.388 [2024-10-14 13:46:07.231556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.388 [2024-10-14 13:46:07.231949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.388 [2024-10-14 13:46:07.231975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.388 [2024-10-14 13:46:07.231991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.388 [2024-10-14 13:46:07.232244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.388 [2024-10-14 13:46:07.232472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.388 [2024-10-14 13:46:07.232490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.388 [2024-10-14 13:46:07.232502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.388 [2024-10-14 13:46:07.235273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.244664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.245062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.245091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.245107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.245330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.245590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.245608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.245635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.648 [2024-10-14 13:46:07.248689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.257786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.258181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.258209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.258224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.258446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.258670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.258688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.258699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.648 [2024-10-14 13:46:07.261638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.270820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.271308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.271349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.271365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.271609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.271801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.271819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.271831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.648 [2024-10-14 13:46:07.274716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.283975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.284346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.284388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.284409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.284657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.284865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.284883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.284894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.648 [2024-10-14 13:46:07.287788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.297126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.297555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.297581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.297596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.297831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.298039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.298057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.298068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.648 [2024-10-14 13:46:07.301008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.310231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.310625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.310652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.310668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.310888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.311112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.311153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.311168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.648 [2024-10-14 13:46:07.314065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.323504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.323899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.323926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.323941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.324172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.324376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.324400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.324413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.648 [2024-10-14 13:46:07.327257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.648 [2024-10-14 13:46:07.336582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.648 [2024-10-14 13:46:07.336975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.648 [2024-10-14 13:46:07.337004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.648 [2024-10-14 13:46:07.337020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.648 [2024-10-14 13:46:07.337268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.648 [2024-10-14 13:46:07.337518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.648 [2024-10-14 13:46:07.337536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.648 [2024-10-14 13:46:07.337548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.340477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.349771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.350140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.350167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.350182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.350416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.350623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.350641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.350653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.353559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.362907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.363338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.363366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.363397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.363649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.363840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.363858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.363869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.366792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.376119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.376492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.376520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.376536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.376779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.376986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.377005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.377016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.379830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.389271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.389657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.389699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.389715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.389967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.390202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.390223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.390235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.393138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.402475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.402966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.403007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.403024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.403290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.403525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.403544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.403555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.406453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.415573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.416064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.416105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.416122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.416376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.416602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.416621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.416633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.419526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.428580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.428882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.428922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.428937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.429162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.429383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.429403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.429415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.432333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.441762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.442202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.442245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.442261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.442479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.442686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.442704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.442715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.445616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.455227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.455568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.455595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.455610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.455824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.456053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.456075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.456093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.459478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.468682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.469079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.469106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.469147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.469362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.649 [2024-10-14 13:46:07.469578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.649 [2024-10-14 13:46:07.469597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.649 [2024-10-14 13:46:07.469610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.649 [2024-10-14 13:46:07.472702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.649 [2024-10-14 13:46:07.481962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.649 [2024-10-14 13:46:07.482349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.649 [2024-10-14 13:46:07.482377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.649 [2024-10-14 13:46:07.482393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.649 [2024-10-14 13:46:07.482620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.650 [2024-10-14 13:46:07.482833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.650 [2024-10-14 13:46:07.482852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.650 [2024-10-14 13:46:07.482864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.650 [2024-10-14 13:46:07.485886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.650 [2024-10-14 13:46:07.495257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.650 [2024-10-14 13:46:07.495618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.650 [2024-10-14 13:46:07.495651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.650 [2024-10-14 13:46:07.495683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.650 [2024-10-14 13:46:07.495904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.650 [2024-10-14 13:46:07.496143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.650 [2024-10-14 13:46:07.496163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.650 [2024-10-14 13:46:07.496176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.650 [2024-10-14 13:46:07.499388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.909 [2024-10-14 13:46:07.508711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:15.909 [2024-10-14 13:46:07.509110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.909 [2024-10-14 13:46:07.509150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:15.909 [2024-10-14 13:46:07.509168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:15.909 [2024-10-14 13:46:07.509409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:15.909 [2024-10-14 13:46:07.509624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:15.909 [2024-10-14 13:46:07.509643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:15.909 [2024-10-14 13:46:07.509655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:15.909 [2024-10-14 13:46:07.512670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:15.909 [2024-10-14 13:46:07.522027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.909 [2024-10-14 13:46:07.522370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.909 [2024-10-14 13:46:07.522397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.909 [2024-10-14 13:46:07.522413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.909 [2024-10-14 13:46:07.522633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.909 [2024-10-14 13:46:07.522837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.909 [2024-10-14 13:46:07.522856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.909 [2024-10-14 13:46:07.522868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.909 [2024-10-14 13:46:07.525903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.909 [2024-10-14 13:46:07.535296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.909 [2024-10-14 13:46:07.535687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.909 [2024-10-14 13:46:07.535714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.909 [2024-10-14 13:46:07.535730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.909 [2024-10-14 13:46:07.535970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.909 [2024-10-14 13:46:07.536196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.909 [2024-10-14 13:46:07.536231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.909 [2024-10-14 13:46:07.536245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.909 [2024-10-14 13:46:07.539243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.909 [2024-10-14 13:46:07.548541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.909 [2024-10-14 13:46:07.548913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.909 [2024-10-14 13:46:07.548956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.909 [2024-10-14 13:46:07.548972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.909 [2024-10-14 13:46:07.549254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.909 [2024-10-14 13:46:07.549484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.909 [2024-10-14 13:46:07.549504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.909 [2024-10-14 13:46:07.549516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.909 [2024-10-14 13:46:07.552525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.909 [2024-10-14 13:46:07.561767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.909 [2024-10-14 13:46:07.562140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.909 [2024-10-14 13:46:07.562168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.909 [2024-10-14 13:46:07.562185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.909 [2024-10-14 13:46:07.562413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.909 [2024-10-14 13:46:07.562628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.909 [2024-10-14 13:46:07.562647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.909 [2024-10-14 13:46:07.562659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.909 [2024-10-14 13:46:07.565672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.909 [2024-10-14 13:46:07.574954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.909 [2024-10-14 13:46:07.575318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.909 [2024-10-14 13:46:07.575347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.909 [2024-10-14 13:46:07.575363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.909 [2024-10-14 13:46:07.575593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.909 [2024-10-14 13:46:07.575807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.909 [2024-10-14 13:46:07.575826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.575837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.578897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.588197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.588557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.588585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.588601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.588829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.589043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.589062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.589074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.592081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.601435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.601806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.601834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.601850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.602090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.602324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.602344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.602357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.605358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.614659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.615097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.615125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.615151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.615364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.615599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.615618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.615630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.618602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.627952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.628390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.628419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.628434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.628678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.628876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.628894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.628906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.631930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.641213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.641542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.641582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.641603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.641825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.642039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.642058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.642070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.645069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.654493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.654801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.654844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.654859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.655080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.655324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.655346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.655359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.658345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.667678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.668023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.668051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.668067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.668304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.668541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.668559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.668572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.671544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.680848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.681166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.681209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.681225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.681446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.681661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.681688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.681700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.684677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.694212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.694667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.694695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.694710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.694951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.695182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.695217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.695231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.698312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.707398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.707785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.707813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.707829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.708042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.910 [2024-10-14 13:46:07.708270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.910 [2024-10-14 13:46:07.708291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.910 [2024-10-14 13:46:07.708304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.910 [2024-10-14 13:46:07.711567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.910 [2024-10-14 13:46:07.721088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.910 [2024-10-14 13:46:07.721457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.910 [2024-10-14 13:46:07.721486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.910 [2024-10-14 13:46:07.721501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.910 [2024-10-14 13:46:07.721730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.911 [2024-10-14 13:46:07.721950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.911 [2024-10-14 13:46:07.721969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.911 [2024-10-14 13:46:07.721982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.911 [2024-10-14 13:46:07.725143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.911 [2024-10-14 13:46:07.734381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.911 [2024-10-14 13:46:07.734744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.911 [2024-10-14 13:46:07.734772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.911 [2024-10-14 13:46:07.734788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.911 [2024-10-14 13:46:07.735016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.911 [2024-10-14 13:46:07.735277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.911 [2024-10-14 13:46:07.735298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.911 [2024-10-14 13:46:07.735311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.911 [2024-10-14 13:46:07.738364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.911 [2024-10-14 13:46:07.747619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.911 [2024-10-14 13:46:07.747960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.911 [2024-10-14 13:46:07.747988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.911 [2024-10-14 13:46:07.748004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.911 [2024-10-14 13:46:07.748244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.911 [2024-10-14 13:46:07.748483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.911 [2024-10-14 13:46:07.748502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.911 [2024-10-14 13:46:07.748513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:15.911 [2024-10-14 13:46:07.751488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:15.911 [2024-10-14 13:46:07.761257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:15.911 [2024-10-14 13:46:07.761580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.911 [2024-10-14 13:46:07.761608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:15.911 [2024-10-14 13:46:07.761624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:15.911 [2024-10-14 13:46:07.761853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:15.911 [2024-10-14 13:46:07.762084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:15.911 [2024-10-14 13:46:07.762102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:15.911 [2024-10-14 13:46:07.762114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.170 [2024-10-14 13:46:07.765352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.774447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.774818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.774846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.774867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.775108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.775356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.775378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.775390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.778405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.787677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.788059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.788087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.788103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.788325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.788546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.788565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.788577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.791549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.800892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.801252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.801279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.801295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.801522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.801735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.801754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.801766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.804825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.814092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.814519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.814548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.814563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.814806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.815003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.815026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.815039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.818091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.827382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.827695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.827736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.827752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.827972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.828237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.828259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.828272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.831266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.840567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.841005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.841033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.841049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.841286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.841525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.841545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.841556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.844534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.853867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.854228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.854256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.854272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.854507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.854720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.854739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.854751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.857740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.867040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.867486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.867515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.867531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.867761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.867975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.867994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.868005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.871012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.880394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.880738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.880765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.880780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.881001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.881263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.881285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.881298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.884295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.893719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.894034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.894077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.894093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.894330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.171 [2024-10-14 13:46:07.894569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.171 [2024-10-14 13:46:07.894588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.171 [2024-10-14 13:46:07.894600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.171 [2024-10-14 13:46:07.897611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.171 [2024-10-14 13:46:07.906891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.171 [2024-10-14 13:46:07.907265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.171 [2024-10-14 13:46:07.907293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.171 [2024-10-14 13:46:07.907309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.171 [2024-10-14 13:46:07.907541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:07.907755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:07.907774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:07.907786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:07.910770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:07.920215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:07.920627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:07.920669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:07.920685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:07.920915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:07.921157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:07.921193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:07.921207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:07.924277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:07.933557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:07.933927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:07.933954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:07.933971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:07.934207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:07.934433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:07.934453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:07.934465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:07.937453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:07.946823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:07.947218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:07.947247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:07.947262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:07.947490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:07.947704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:07.947723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:07.947740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:07.950769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:07.960124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:07.960534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:07.960562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:07.960578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:07.960791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:07.961035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:07.961056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:07.961069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:07.964405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:07.973611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:07.973986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:07.974030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:07.974046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:07.974285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:07.974529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:07.974548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:07.974560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:07.977650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:07.987050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:07.987416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:07.987444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:07.987475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:07.987697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:07.987911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:07.987930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:07.987942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:07.990947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:08.000322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:08.000713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:08.000746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:08.000762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:08.001003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:08.001245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:08.001265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:08.001278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:08.004399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.172 [2024-10-14 13:46:08.013531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.172 [2024-10-14 13:46:08.013881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.172 [2024-10-14 13:46:08.013910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.172 [2024-10-14 13:46:08.013926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.172 [2024-10-14 13:46:08.014163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.172 [2024-10-14 13:46:08.014373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.172 [2024-10-14 13:46:08.014393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.172 [2024-10-14 13:46:08.014405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.172 [2024-10-14 13:46:08.017584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.027151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.027586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.027618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.027634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.027847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.028061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.028080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.028091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.031445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.040492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.040862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.040889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.040905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.041153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.041370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.041390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.041402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.044400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.053750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.054074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.054101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.054116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.054371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.054602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.054621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.054634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.057666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.067025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.067385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.067428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.067445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.067665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.067878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.067897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.067909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.070927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.080253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.080600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.080641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.080657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.080877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.081092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.081111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.081122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.084102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.093685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.094184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.094227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.094243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.094471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.094685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.094704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.094716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.097766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.106960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.107390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.107419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.107435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.107702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.107899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.107918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.107930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.110872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.120252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.120586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.120627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.120643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.120864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.121077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.121096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.121108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.124109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 [2024-10-14 13:46:08.133614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.133987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.134029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.134050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.134308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.134544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.134563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.134574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.137552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.432 5200.00 IOPS, 20.31 MiB/s [2024-10-14T11:46:08.285Z] [2024-10-14 13:46:08.146971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.432 [2024-10-14 13:46:08.147311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.432 [2024-10-14 13:46:08.147354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.432 [2024-10-14 13:46:08.147370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.432 [2024-10-14 13:46:08.147599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.432 [2024-10-14 13:46:08.147814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.432 [2024-10-14 13:46:08.147833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.432 [2024-10-14 13:46:08.147845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.432 [2024-10-14 13:46:08.150862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:16.433 [2024-10-14 13:46:08.160306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.160669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.160698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.160715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.160943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.161183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.161204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.161217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.164210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.173555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.174051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.174093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.174111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.174350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.174583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.174608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.174621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.177670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.186791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.187160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.187207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.187222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.187455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.187669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.187688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.187700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.190683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.200211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.200652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.200680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.200696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.200938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.201161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.201196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.201210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.204309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.213432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.213827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.213856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.213872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.214085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.214340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.214362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.214375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.217732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.226854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.227280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.227308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.227324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.227552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.227772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.227792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.227804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.230960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.240139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.240554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.240597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.240612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.240880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.241078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.241096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.241108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.244207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.253291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.253672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.253702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.253718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.253959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.254198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.254219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.254232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.257254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.266635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.267139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.267182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.267204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.267433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.267647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.267666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.267679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.270697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.433 [2024-10-14 13:46:08.280236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.433 [2024-10-14 13:46:08.280580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.433 [2024-10-14 13:46:08.280608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.433 [2024-10-14 13:46:08.280625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.433 [2024-10-14 13:46:08.280853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.433 [2024-10-14 13:46:08.281087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.433 [2024-10-14 13:46:08.281121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.433 [2024-10-14 13:46:08.281144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.433 [2024-10-14 13:46:08.284589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.293734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.294151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.294179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.294195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.294424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.294637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.294655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.294667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.693 [2024-10-14 13:46:08.297704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.307037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.307456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.307484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.307500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.307744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.307957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.307981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.307994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.693 [2024-10-14 13:46:08.310967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.320409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.320731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.320758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.320774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.320995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.321252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.321273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.321285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.693 [2024-10-14 13:46:08.324388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.333665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.334010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.334038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.334054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.334277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.334528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.334548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.334559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.693 [2024-10-14 13:46:08.337533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.346969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.347346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.347374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.347389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.347633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.347831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.347850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.347862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.693 [2024-10-14 13:46:08.350877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.360205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.360543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.360585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.360600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.360821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.361034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.361053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.361066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.693 [2024-10-14 13:46:08.364083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.373548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.373953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.373981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.373997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.374221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.374462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.374495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.374507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.693 [2024-10-14 13:46:08.377507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.693 [2024-10-14 13:46:08.386760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.693 [2024-10-14 13:46:08.387195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.693 [2024-10-14 13:46:08.387223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.693 [2024-10-14 13:46:08.387239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.693 [2024-10-14 13:46:08.387481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.693 [2024-10-14 13:46:08.387679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.693 [2024-10-14 13:46:08.387697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.693 [2024-10-14 13:46:08.387709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.390724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.399930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.400347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.400375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.400391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.400627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.400842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.400860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.400872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.403908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.413202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.413599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.413642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.413658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.413909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.414122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.414151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.414164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.417176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.426455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.426845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.426872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.426887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.427107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.427342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.427363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.427376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.430366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.439654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.440030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.440058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.440074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.440311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.440549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.440568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.440585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.443610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.452911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.453286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.453314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.453330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.453557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.453789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.453808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.453820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.456866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.466186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.466605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.466633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.466649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.466862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.467110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.467138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.467153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.470492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.479564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.479935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.479977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.479993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.480231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.480464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.480498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.480510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.483555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.492835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.493178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.493211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.493228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.493456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.493669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.493688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.493700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.496752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.506046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.506426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.506454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.506470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.506711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.506908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.506927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.506938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.509944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.519399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.519884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.519925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.519942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.520207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.520419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.694 [2024-10-14 13:46:08.520454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.694 [2024-10-14 13:46:08.520467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.694 [2024-10-14 13:46:08.523522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.694 [2024-10-14 13:46:08.532613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.694 [2024-10-14 13:46:08.532960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.694 [2024-10-14 13:46:08.532988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.694 [2024-10-14 13:46:08.533004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.694 [2024-10-14 13:46:08.533227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.694 [2024-10-14 13:46:08.533486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.695 [2024-10-14 13:46:08.533506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.695 [2024-10-14 13:46:08.533518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.695 [2024-10-14 13:46:08.536506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.695 [2024-10-14 13:46:08.546213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.695 [2024-10-14 13:46:08.546561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.695 [2024-10-14 13:46:08.546588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.695 [2024-10-14 13:46:08.546604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.695 [2024-10-14 13:46:08.546817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.547065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.547100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.547113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.550180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.559421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.559870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.954 [2024-10-14 13:46:08.559898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.954 [2024-10-14 13:46:08.559914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.954 [2024-10-14 13:46:08.560167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.560372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.560391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.560403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.563420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.572713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.573104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.954 [2024-10-14 13:46:08.573153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.954 [2024-10-14 13:46:08.573169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.954 [2024-10-14 13:46:08.573383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.573615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.573634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.573645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.576624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.585948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.586342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.954 [2024-10-14 13:46:08.586371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.954 [2024-10-14 13:46:08.586387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.954 [2024-10-14 13:46:08.586626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.586824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.586842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.586854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.589861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.599204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.599580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.954 [2024-10-14 13:46:08.599608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.954 [2024-10-14 13:46:08.599624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.954 [2024-10-14 13:46:08.599851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.600064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.600083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.600095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.603092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.612458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.612833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.954 [2024-10-14 13:46:08.612861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.954 [2024-10-14 13:46:08.612877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.954 [2024-10-14 13:46:08.613118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.613331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.613350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.613363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.616317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.625665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.626015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.954 [2024-10-14 13:46:08.626042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.954 [2024-10-14 13:46:08.626063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.954 [2024-10-14 13:46:08.626314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.626551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.626570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.626582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.629557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.638841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.639261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.954 [2024-10-14 13:46:08.639289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.954 [2024-10-14 13:46:08.639305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.954 [2024-10-14 13:46:08.639532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.954 [2024-10-14 13:46:08.639746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.954 [2024-10-14 13:46:08.639765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.954 [2024-10-14 13:46:08.639777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.954 [2024-10-14 13:46:08.642759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.954 [2024-10-14 13:46:08.652038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.954 [2024-10-14 13:46:08.652474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.652501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.652517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.652737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.652965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.652984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.652996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.655947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.665199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.665599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.665627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.665643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.665884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.666081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.666105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.666118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.668976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.678381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.678800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.678827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.678843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.679082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.679311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.679331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.679344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.682218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.691468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.691798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.691825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.691840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.692060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.692301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.692322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.692334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.695208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.704686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.705008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.705034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.705049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.705310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.705541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.705559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.705571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.708506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.717756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.718141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.718170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.718186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.718399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.718653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.718671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.718683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.722270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.730968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.731362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.731406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.731421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.731657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.731864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.731883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.731894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.734904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.744182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.744550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.744577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.744592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.744815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.745024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.745042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.745053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.747920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.757281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.757776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.757818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.757835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.758103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.758328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.758348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.758360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.761267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.770419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.770909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.770951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.770967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.771243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.771449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.771468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.771480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.774385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.955 [2024-10-14 13:46:08.783690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:16.955 [2024-10-14 13:46:08.783998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:16.955 [2024-10-14 13:46:08.784066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:16.955 [2024-10-14 13:46:08.784103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:16.955 [2024-10-14 13:46:08.784353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:16.955 [2024-10-14 13:46:08.784585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:16.955 [2024-10-14 13:46:08.784604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:16.955 [2024-10-14 13:46:08.784616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:16.955 [2024-10-14 13:46:08.787588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:16.956 [2024-10-14 13:46:08.796743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:16.956 [2024-10-14 13:46:08.797102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.956 [2024-10-14 13:46:08.797136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:16.956 [2024-10-14 13:46:08.797169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:16.956 [2024-10-14 13:46:08.797408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:16.956 [2024-10-14 13:46:08.797617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:16.956 [2024-10-14 13:46:08.797635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:16.956 [2024-10-14 13:46:08.797652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:16.956 [2024-10-14 13:46:08.800543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.215 [2024-10-14 13:46:08.810265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.215 [2024-10-14 13:46:08.810646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.215 [2024-10-14 13:46:08.810673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.215 [2024-10-14 13:46:08.810688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.215 [2024-10-14 13:46:08.810923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.215 [2024-10-14 13:46:08.811157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.215 [2024-10-14 13:46:08.811191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.215 [2024-10-14 13:46:08.811204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.215 [2024-10-14 13:46:08.814430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.215 [2024-10-14 13:46:08.823418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.215 [2024-10-14 13:46:08.823759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.215 [2024-10-14 13:46:08.823786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.215 [2024-10-14 13:46:08.823801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.215 [2024-10-14 13:46:08.824016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.215 [2024-10-14 13:46:08.824253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.215 [2024-10-14 13:46:08.824272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.215 [2024-10-14 13:46:08.824284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.215 [2024-10-14 13:46:08.827071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.215 [2024-10-14 13:46:08.836474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.215 [2024-10-14 13:46:08.836955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.215 [2024-10-14 13:46:08.837006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.215 [2024-10-14 13:46:08.837021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.215 [2024-10-14 13:46:08.837302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.215 [2024-10-14 13:46:08.837538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.215 [2024-10-14 13:46:08.837556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.215 [2024-10-14 13:46:08.837567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.215 [2024-10-14 13:46:08.840477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.215 [2024-10-14 13:46:08.849484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.215 [2024-10-14 13:46:08.849969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.215 [2024-10-14 13:46:08.850022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.215 [2024-10-14 13:46:08.850037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.215 [2024-10-14 13:46:08.850310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.215 [2024-10-14 13:46:08.850528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.215 [2024-10-14 13:46:08.850547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.215 [2024-10-14 13:46:08.850559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.215 [2024-10-14 13:46:08.853448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.215 [2024-10-14 13:46:08.862560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.215 [2024-10-14 13:46:08.863030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.215 [2024-10-14 13:46:08.863056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.215 [2024-10-14 13:46:08.863086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.215 [2024-10-14 13:46:08.863336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.215 [2024-10-14 13:46:08.863568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.215 [2024-10-14 13:46:08.863587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.863598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.866489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.875691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.876160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.876206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.876221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.876466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.876672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.876690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.876702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.879556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.888735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.889100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.889135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.889169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.889399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.889626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.889645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.889656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.892547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.902017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.902434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.902474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.902489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.902706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.902914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.902932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.902943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.905910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.915153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.915520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.915563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.915578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.915830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.916037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.916055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.916066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.918992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.928152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.928527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.928555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.928571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.928810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.929001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.929019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.929036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.931849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.941177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.941543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.941586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.941602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.941851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.942043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.942061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.942072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.944986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.954349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.954678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.954706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.954721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.954951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.955190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.955211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.955224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.958101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.967327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.967627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.967668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.967683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.967897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.968137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.968157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.968170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.971690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.980597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.980960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.981007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.981024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.981290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.981522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.981540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.981552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.984466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:08.993604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:08.993966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:08.994008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:08.994022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:08.994303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:08.994534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.216 [2024-10-14 13:46:08.994552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.216 [2024-10-14 13:46:08.994564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.216 [2024-10-14 13:46:08.997490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.216 [2024-10-14 13:46:09.006746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.216 [2024-10-14 13:46:09.007107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.216 [2024-10-14 13:46:09.007142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.216 [2024-10-14 13:46:09.007160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.216 [2024-10-14 13:46:09.007394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.216 [2024-10-14 13:46:09.007601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.217 [2024-10-14 13:46:09.007619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.217 [2024-10-14 13:46:09.007631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.217 [2024-10-14 13:46:09.010448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.217 [2024-10-14 13:46:09.019984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.217 [2024-10-14 13:46:09.020356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.217 [2024-10-14 13:46:09.020398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.217 [2024-10-14 13:46:09.020413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.217 [2024-10-14 13:46:09.020659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.217 [2024-10-14 13:46:09.020874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.217 [2024-10-14 13:46:09.020893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.217 [2024-10-14 13:46:09.020904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.217 [2024-10-14 13:46:09.023715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.217 [2024-10-14 13:46:09.033024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.217 [2024-10-14 13:46:09.033397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.217 [2024-10-14 13:46:09.033425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.217 [2024-10-14 13:46:09.033441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.217 [2024-10-14 13:46:09.033669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.217 [2024-10-14 13:46:09.033876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.217 [2024-10-14 13:46:09.033895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.217 [2024-10-14 13:46:09.033906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.217 [2024-10-14 13:46:09.036766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.217 [2024-10-14 13:46:09.046112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.217 [2024-10-14 13:46:09.046450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.217 [2024-10-14 13:46:09.046477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.217 [2024-10-14 13:46:09.046492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.217 [2024-10-14 13:46:09.046712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.217 [2024-10-14 13:46:09.046920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.217 [2024-10-14 13:46:09.046938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.217 [2024-10-14 13:46:09.046950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.217 [2024-10-14 13:46:09.049763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.217 [2024-10-14 13:46:09.059113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.217 [2024-10-14 13:46:09.059530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.217 [2024-10-14 13:46:09.059581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.217 [2024-10-14 13:46:09.059597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.217 [2024-10-14 13:46:09.059856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.217 [2024-10-14 13:46:09.060048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.217 [2024-10-14 13:46:09.060066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.217 [2024-10-14 13:46:09.060077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.217 [2024-10-14 13:46:09.063073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.477 [2024-10-14 13:46:09.072276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.477 [2024-10-14 13:46:09.072753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.477 [2024-10-14 13:46:09.072803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.477 [2024-10-14 13:46:09.072818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.477 [2024-10-14 13:46:09.073064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.477 [2024-10-14 13:46:09.073318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.477 [2024-10-14 13:46:09.073339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.477 [2024-10-14 13:46:09.073366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.477 [2024-10-14 13:46:09.076591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.477 [2024-10-14 13:46:09.085418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.477 [2024-10-14 13:46:09.085796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.477 [2024-10-14 13:46:09.085823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.477 [2024-10-14 13:46:09.085838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.477 [2024-10-14 13:46:09.086073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.477 [2024-10-14 13:46:09.086300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.477 [2024-10-14 13:46:09.086320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.477 [2024-10-14 13:46:09.086333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.477 [2024-10-14 13:46:09.089235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.477 [2024-10-14 13:46:09.098520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:17.477 [2024-10-14 13:46:09.098885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:17.477 [2024-10-14 13:46:09.098928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:17.477 [2024-10-14 13:46:09.098944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:17.477 [2024-10-14 13:46:09.099220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:17.477 [2024-10-14 13:46:09.099425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:17.477 [2024-10-14 13:46:09.099459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:17.477 [2024-10-14 13:46:09.099471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:17.477 [2024-10-14 13:46:09.102364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:17.477 [2024-10-14 13:46:09.111658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.477 [2024-10-14 13:46:09.112023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.477 [2024-10-14 13:46:09.112066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.477 [2024-10-14 13:46:09.112087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.477 [2024-10-14 13:46:09.112337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.477 [2024-10-14 13:46:09.112571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.477 [2024-10-14 13:46:09.112589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.477 [2024-10-14 13:46:09.112601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.477 [2024-10-14 13:46:09.115491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.477 [2024-10-14 13:46:09.124990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.477 [2024-10-14 13:46:09.125326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.477 [2024-10-14 13:46:09.125368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.477 [2024-10-14 13:46:09.125386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.477 [2024-10-14 13:46:09.125613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.477 [2024-10-14 13:46:09.125839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.125858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.125870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.128702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.138172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.138537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.138580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.138597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.138833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.139039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.139058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.139070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 4160.00 IOPS, 16.25 MiB/s [2024-10-14T11:46:09.331Z] [2024-10-14 13:46:09.143883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.151320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.151827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.151869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.151886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.152121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.152329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.152352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.152365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.155307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.164920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.165304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.165355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.165371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.165610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.165821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.165839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.165851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.168933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.178659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.179046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.179073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.179089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.179311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.179540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.179560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.179573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.182819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.191897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.192223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.192251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.192268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.192495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.192703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.192721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.192733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.195760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.205233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.205638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.205680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.205696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.205947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.206189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.206210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.206222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.209204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.218458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.218871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.218920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.218941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.219241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.219460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.219496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.219509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.222993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.231779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.232081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.232120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.232150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.232379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.232610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.232629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.232641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.235607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.244947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.245372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.245443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.245458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.245708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.245922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.245940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.245952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.248887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.258084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.258528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.258581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.258597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.258823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.478 [2024-10-14 13:46:09.259015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.478 [2024-10-14 13:46:09.259033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.478 [2024-10-14 13:46:09.259044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.478 [2024-10-14 13:46:09.261820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.478 [2024-10-14 13:46:09.271250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.478 [2024-10-14 13:46:09.271703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.478 [2024-10-14 13:46:09.271730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.478 [2024-10-14 13:46:09.271745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.478 [2024-10-14 13:46:09.271981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.479 [2024-10-14 13:46:09.272216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.479 [2024-10-14 13:46:09.272236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.479 [2024-10-14 13:46:09.272247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.479 [2024-10-14 13:46:09.275079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.479 [2024-10-14 13:46:09.284547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.479 [2024-10-14 13:46:09.284958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.479 [2024-10-14 13:46:09.284999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.479 [2024-10-14 13:46:09.285015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.479 [2024-10-14 13:46:09.285265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.479 [2024-10-14 13:46:09.285470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.479 [2024-10-14 13:46:09.285489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.479 [2024-10-14 13:46:09.285506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.479 [2024-10-14 13:46:09.288397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.479 [2024-10-14 13:46:09.297772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.479 [2024-10-14 13:46:09.298250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.479 [2024-10-14 13:46:09.298279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.479 [2024-10-14 13:46:09.298295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.479 [2024-10-14 13:46:09.298534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.479 [2024-10-14 13:46:09.298747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.479 [2024-10-14 13:46:09.298766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.479 [2024-10-14 13:46:09.298778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.479 [2024-10-14 13:46:09.301772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.479 [2024-10-14 13:46:09.310931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.479 [2024-10-14 13:46:09.311320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.479 [2024-10-14 13:46:09.311363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.479 [2024-10-14 13:46:09.311378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.479 [2024-10-14 13:46:09.311641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.479 [2024-10-14 13:46:09.311832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.479 [2024-10-14 13:46:09.311850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.479 [2024-10-14 13:46:09.311861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.479 [2024-10-14 13:46:09.314790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.479 [2024-10-14 13:46:09.324197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.479 [2024-10-14 13:46:09.324597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.479 [2024-10-14 13:46:09.324640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.479 [2024-10-14 13:46:09.324656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.479 [2024-10-14 13:46:09.324909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.479 [2024-10-14 13:46:09.325139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.479 [2024-10-14 13:46:09.325159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.479 [2024-10-14 13:46:09.325171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.479 [2024-10-14 13:46:09.328335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.738 [2024-10-14 13:46:09.337904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.738 [2024-10-14 13:46:09.338334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.738 [2024-10-14 13:46:09.338363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.738 [2024-10-14 13:46:09.338379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.738 [2024-10-14 13:46:09.338632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.738 [2024-10-14 13:46:09.338825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.738 [2024-10-14 13:46:09.338843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.738 [2024-10-14 13:46:09.338854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.738 [2024-10-14 13:46:09.341934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.738 [2024-10-14 13:46:09.351125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.738 [2024-10-14 13:46:09.351514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.738 [2024-10-14 13:46:09.351541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.351557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.351791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.351983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.352001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.352013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.354944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.364295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.364743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.364785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.364801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.365040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.365303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.365324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.365337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.368255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.377513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.377813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.377853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.377869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.378089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.378328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.378348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.378360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.381281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.390775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.391100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.391151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.391168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.391423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.391647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.391665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.391676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.394679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.403924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.404360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.404389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.404405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.404630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.404837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.404855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.404867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.407806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.417019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.417429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.417471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.417486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.417714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.417923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.417941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.417958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.420886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.430077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.430514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.430557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.430573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.430812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.431004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.431022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.431034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.433994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.443263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.443720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.443762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.443779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.444019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.444273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.444293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.444305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.447111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.456482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.456873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.456900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.456916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.457151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.457367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.457386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.457398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.460188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.469497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.469856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.469890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.469906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.470143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.470385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.470406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.470419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.473890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.482752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.739 [2024-10-14 13:46:09.483178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.739 [2024-10-14 13:46:09.483207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.739 [2024-10-14 13:46:09.483223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.739 [2024-10-14 13:46:09.483464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.739 [2024-10-14 13:46:09.483656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.739 [2024-10-14 13:46:09.483674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.739 [2024-10-14 13:46:09.483685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.739 [2024-10-14 13:46:09.486641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.739 [2024-10-14 13:46:09.495882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.496217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.496244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.496259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.496480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.496707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.496725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.496736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.499651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.740 [2024-10-14 13:46:09.508940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.509263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.509290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.509305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.509519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.509732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.509751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.509762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.512661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.740 [2024-10-14 13:46:09.522102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.522507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.522549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.522564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.522817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.523028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.523047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.523058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.525974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.740 [2024-10-14 13:46:09.535197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.535561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.535588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.535604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.535837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.536044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.536062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.536074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.539040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.740 [2024-10-14 13:46:09.548304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.548645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.548670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.548685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.548899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.549107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.549125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.549163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.552065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.740 [2024-10-14 13:46:09.561403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.561765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.561792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.561822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.562069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.562308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.562329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.562341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.565247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.740 [2024-10-14 13:46:09.574474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.574889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.574916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.574946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.575176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.575380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.575399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.575411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.578300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:17.740 [2024-10-14 13:46:09.587534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:17.740 [2024-10-14 13:46:09.587897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:17.740 [2024-10-14 13:46:09.587938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:17.740 [2024-10-14 13:46:09.587953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:17.740 [2024-10-14 13:46:09.588232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:17.740 [2024-10-14 13:46:09.588451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:17.740 [2024-10-14 13:46:09.588471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:17.740 [2024-10-14 13:46:09.588484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:17.740 [2024-10-14 13:46:09.591814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.000 [2024-10-14 13:46:09.600875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.000 [2024-10-14 13:46:09.601284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.000 [2024-10-14 13:46:09.601325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.000 [2024-10-14 13:46:09.601346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.000 [2024-10-14 13:46:09.601607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.000 [2024-10-14 13:46:09.601799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.000 [2024-10-14 13:46:09.601817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.000 [2024-10-14 13:46:09.601829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.000 [2024-10-14 13:46:09.604642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.000 [2024-10-14 13:46:09.613995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.000 [2024-10-14 13:46:09.614367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.000 [2024-10-14 13:46:09.614409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.000 [2024-10-14 13:46:09.614424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.000 [2024-10-14 13:46:09.614671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.000 [2024-10-14 13:46:09.614863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.000 [2024-10-14 13:46:09.614881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.000 [2024-10-14 13:46:09.614893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.000 [2024-10-14 13:46:09.617706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.000 [2024-10-14 13:46:09.627150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.000 [2024-10-14 13:46:09.627520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.000 [2024-10-14 13:46:09.627562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.000 [2024-10-14 13:46:09.627577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.000 [2024-10-14 13:46:09.627821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.000 [2024-10-14 13:46:09.628012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.000 [2024-10-14 13:46:09.628030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.000 [2024-10-14 13:46:09.628042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.000 [2024-10-14 13:46:09.630856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.000 [2024-10-14 13:46:09.640228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.000 [2024-10-14 13:46:09.640717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.000 [2024-10-14 13:46:09.640744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.000 [2024-10-14 13:46:09.640775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.000 [2024-10-14 13:46:09.641025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.000 [2024-10-14 13:46:09.641263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.000 [2024-10-14 13:46:09.641288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.000 [2024-10-14 13:46:09.641301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.000 [2024-10-14 13:46:09.644193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.000 [2024-10-14 13:46:09.653364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.000 [2024-10-14 13:46:09.653690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.000 [2024-10-14 13:46:09.653718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.000 [2024-10-14 13:46:09.653734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.000 [2024-10-14 13:46:09.653954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.000 [2024-10-14 13:46:09.654204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.000 [2024-10-14 13:46:09.654225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.000 [2024-10-14 13:46:09.654237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.000 [2024-10-14 13:46:09.657183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.000 [2024-10-14 13:46:09.666808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.000 [2024-10-14 13:46:09.667244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.000 [2024-10-14 13:46:09.667279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.000 [2024-10-14 13:46:09.667296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.000 [2024-10-14 13:46:09.667524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.000 [2024-10-14 13:46:09.667738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.000 [2024-10-14 13:46:09.667757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.000 [2024-10-14 13:46:09.667769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.000 [2024-10-14 13:46:09.670879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.000 [2024-10-14 13:46:09.680235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.000 [2024-10-14 13:46:09.680571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.001 [2024-10-14 13:46:09.680597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.001 [2024-10-14 13:46:09.680628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.001 [2024-10-14 13:46:09.680850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.001 [2024-10-14 13:46:09.681061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.001 [2024-10-14 13:46:09.681079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.001 [2024-10-14 13:46:09.681091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.001 [2024-10-14 13:46:09.684139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 396193 Killed "${NVMF_APP[@]}" "$@"
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=397154
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 397154
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 397154 ']'
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:18.001 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:18.001 [2024-10-14 13:46:09.693618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.001 [2024-10-14 13:46:09.693931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.001 [2024-10-14 13:46:09.693958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.001 [2024-10-14 13:46:09.693973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.001 [2024-10-14 13:46:09.694203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.001 [2024-10-14 13:46:09.694436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.001 [2024-10-14 13:46:09.694455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.001 [2024-10-14 13:46:09.694467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.001 [2024-10-14 13:46:09.697595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.001 [2024-10-14 13:46:09.707066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.001 [2024-10-14 13:46:09.707431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.001 [2024-10-14 13:46:09.707476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.001 [2024-10-14 13:46:09.707493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.001 [2024-10-14 13:46:09.707733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.001 [2024-10-14 13:46:09.707930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.001 [2024-10-14 13:46:09.707949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.001 [2024-10-14 13:46:09.707961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.001 [2024-10-14 13:46:09.711048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.001 [2024-10-14 13:46:09.720383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:18.001 [2024-10-14 13:46:09.720755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:18.001 [2024-10-14 13:46:09.720783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420
00:35:18.001 [2024-10-14 13:46:09.720799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set
00:35:18.001 [2024-10-14 13:46:09.721012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor
00:35:18.001 [2024-10-14 13:46:09.721241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:18.001 [2024-10-14 13:46:09.721262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:18.001 [2024-10-14 13:46:09.721276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:18.001 [2024-10-14 13:46:09.724482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:18.001 [2024-10-14 13:46:09.733753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.001 [2024-10-14 13:46:09.734137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.001 [2024-10-14 13:46:09.734166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.001 [2024-10-14 13:46:09.734183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.001 [2024-10-14 13:46:09.734414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.001 [2024-10-14 13:46:09.734631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.001 [2024-10-14 13:46:09.734650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.001 [2024-10-14 13:46:09.734662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.001 [2024-10-14 13:46:09.737146] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:35:18.001 [2024-10-14 13:46:09.737219] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.001 [2024-10-14 13:46:09.737724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.001 [2024-10-14 13:46:09.747115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.001 [2024-10-14 13:46:09.747582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.001 [2024-10-14 13:46:09.747609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.001 [2024-10-14 13:46:09.747624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.001 [2024-10-14 13:46:09.747844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.001 [2024-10-14 13:46:09.748058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.001 [2024-10-14 13:46:09.748076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.001 [2024-10-14 13:46:09.748088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.001 [2024-10-14 13:46:09.751184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.001 [2024-10-14 13:46:09.760332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.001 [2024-10-14 13:46:09.760703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.001 [2024-10-14 13:46:09.760730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.001 [2024-10-14 13:46:09.760745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.001 [2024-10-14 13:46:09.760960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.001 [2024-10-14 13:46:09.761203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.001 [2024-10-14 13:46:09.761224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.001 [2024-10-14 13:46:09.761237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.001 [2024-10-14 13:46:09.764246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.001 [2024-10-14 13:46:09.773565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.001 [2024-10-14 13:46:09.773936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.001 [2024-10-14 13:46:09.773978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.001 [2024-10-14 13:46:09.773994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.001 [2024-10-14 13:46:09.774245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.001 [2024-10-14 13:46:09.774465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.001 [2024-10-14 13:46:09.774499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.001 [2024-10-14 13:46:09.774513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.001 [2024-10-14 13:46:09.777657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.001 [2024-10-14 13:46:09.786920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.001 [2024-10-14 13:46:09.787297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.001 [2024-10-14 13:46:09.787326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.001 [2024-10-14 13:46:09.787342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.001 [2024-10-14 13:46:09.787584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.001 [2024-10-14 13:46:09.787782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.001 [2024-10-14 13:46:09.787800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.001 [2024-10-14 13:46:09.787812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.001 [2024-10-14 13:46:09.790790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.001 [2024-10-14 13:46:09.800226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.001 [2024-10-14 13:46:09.800658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.002 [2024-10-14 13:46:09.800701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.002 [2024-10-14 13:46:09.800716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.002 [2024-10-14 13:46:09.800987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.002 [2024-10-14 13:46:09.801213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.002 [2024-10-14 13:46:09.801234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.002 [2024-10-14 13:46:09.801246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.002 [2024-10-14 13:46:09.803577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:18.002 [2024-10-14 13:46:09.804274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.002 [2024-10-14 13:46:09.813519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.002 [2024-10-14 13:46:09.814026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.002 [2024-10-14 13:46:09.814078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.002 [2024-10-14 13:46:09.814097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.002 [2024-10-14 13:46:09.814359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.002 [2024-10-14 13:46:09.814599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.002 [2024-10-14 13:46:09.814619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.002 [2024-10-14 13:46:09.814634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.002 [2024-10-14 13:46:09.817639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.002 [2024-10-14 13:46:09.826917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.002 [2024-10-14 13:46:09.827335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.002 [2024-10-14 13:46:09.827368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.002 [2024-10-14 13:46:09.827386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.002 [2024-10-14 13:46:09.827632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.002 [2024-10-14 13:46:09.827838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.002 [2024-10-14 13:46:09.827858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.002 [2024-10-14 13:46:09.827873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.002 [2024-10-14 13:46:09.830932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.002 [2024-10-14 13:46:09.840199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.002 [2024-10-14 13:46:09.840578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.002 [2024-10-14 13:46:09.840606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.002 [2024-10-14 13:46:09.840622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.002 [2024-10-14 13:46:09.840848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.002 [2024-10-14 13:46:09.841062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.002 [2024-10-14 13:46:09.841093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.002 [2024-10-14 13:46:09.841121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.002 [2024-10-14 13:46:09.844093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:18.002 [2024-10-14 13:46:09.849571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.002 [2024-10-14 13:46:09.849604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.002 [2024-10-14 13:46:09.849634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.002 [2024-10-14 13:46:09.849647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:18.002 [2024-10-14 13:46:09.849658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:18.002 [2024-10-14 13:46:09.851095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.002 [2024-10-14 13:46:09.851161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:18.002 [2024-10-14 13:46:09.851165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.002 [2024-10-14 13:46:09.853842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.261 [2024-10-14 13:46:09.854185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.261 [2024-10-14 13:46:09.854215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.261 [2024-10-14 13:46:09.854234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.261 [2024-10-14 13:46:09.854452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.261 [2024-10-14 13:46:09.854679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.261 [2024-10-14 13:46:09.854700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.261 [2024-10-14 13:46:09.854715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.261 [2024-10-14 13:46:09.857985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.261 [2024-10-14 13:46:09.867410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.261 [2024-10-14 13:46:09.867910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.261 [2024-10-14 13:46:09.867948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.261 [2024-10-14 13:46:09.867969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.261 [2024-10-14 13:46:09.868204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.261 [2024-10-14 13:46:09.868443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.261 [2024-10-14 13:46:09.868464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.261 [2024-10-14 13:46:09.868479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.261 [2024-10-14 13:46:09.871642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.261 [2024-10-14 13:46:09.881098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.261 [2024-10-14 13:46:09.881596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.261 [2024-10-14 13:46:09.881633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.261 [2024-10-14 13:46:09.881665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.261 [2024-10-14 13:46:09.881904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.261 [2024-10-14 13:46:09.882121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.261 [2024-10-14 13:46:09.882152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.261 [2024-10-14 13:46:09.882169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.885353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:09.894720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.895228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.895266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.895287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.895524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.895740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.895761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.895778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.898988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:09.908413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.908870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.908904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.908924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.909157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.909380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.909401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.909433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.912631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:09.922007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.922510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.922550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.922570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.922794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.923031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.923063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.923080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.926467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:09.935664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.936046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.936078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.936095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.936321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.936552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.936573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.936588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.939801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:09.949196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.949509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.949538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.949554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.949767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.949985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.950005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.950019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.953258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:09.962867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.963227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.963255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.963271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.963485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.963703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.963723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.963737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:18.262 [2024-10-14 13:46:09.966972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:09.976474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.976841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.976869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.976886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.977099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.977325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.977346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.977360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.980575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:18.262 [2024-10-14 13:46:09.988860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.262 [2024-10-14 13:46:09.990087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:09.990419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:09.990447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:09.990463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:09.990677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:09.990903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:09.990923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:09.990936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:09.994181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.262 13:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:18.262 [2024-10-14 13:46:10.003698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:10.004141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:10.004173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:10.004201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.262 [2024-10-14 13:46:10.004422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.262 [2024-10-14 13:46:10.004644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.262 [2024-10-14 13:46:10.004665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.262 [2024-10-14 13:46:10.004681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.262 [2024-10-14 13:46:10.008245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.262 [2024-10-14 13:46:10.017345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.262 [2024-10-14 13:46:10.017767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.262 [2024-10-14 13:46:10.017797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.262 [2024-10-14 13:46:10.017814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.263 [2024-10-14 13:46:10.018028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.263 [2024-10-14 13:46:10.018290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.263 [2024-10-14 13:46:10.018314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.263 [2024-10-14 13:46:10.018331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.263 [2024-10-14 13:46:10.021617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.263 Malloc0 00:35:18.263 [2024-10-14 13:46:10.031010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:18.263 [2024-10-14 13:46:10.031497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.263 [2024-10-14 13:46:10.031529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.263 [2024-10-14 13:46:10.031550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:18.263 [2024-10-14 13:46:10.031771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.263 [2024-10-14 13:46:10.031994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.263 [2024-10-14 13:46:10.032017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.263 [2024-10-14 13:46:10.032034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:18.263 [2024-10-14 13:46:10.035321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:18.263 [2024-10-14 13:46:10.044684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.263 [2024-10-14 13:46:10.045063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:18.263 [2024-10-14 13:46:10.045091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6de00 with addr=10.0.0.2, port=4420 00:35:18.263 [2024-10-14 13:46:10.045107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6de00 is same with the state(6) to be set 00:35:18.263 [2024-10-14 13:46:10.045333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6de00 (9): Bad file descriptor 00:35:18.263 [2024-10-14 13:46:10.045562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.263 [2024-10-14 13:46:10.045583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:18.263 [2024-10-14 13:46:10.045596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:18.263 [2024-10-14 13:46:10.048908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:18.263 [2024-10-14 13:46:10.050756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.263 13:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 396486 00:35:18.263 [2024-10-14 13:46:10.058433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:18.521 3466.67 IOPS, 13.54 MiB/s [2024-10-14T11:46:10.374Z] [2024-10-14 13:46:10.249935] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
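The target-side setup traced in the shell lines above (Malloc bdev, subsystem, namespace, TCP listener) can be replayed as a standalone sequence of SPDK RPC calls. This is a sketch, not the test script itself: it assumes a running `nvmf_tgt` process and the stock `scripts/rpc.py` from the SPDK tree on `PATH`; the NQN, serial number, address, and port are taken verbatim from the log, while the `nvmf_create_transport` call is an assumption — it does not appear in this excerpt but a TCP transport must exist before a TCP listener can be added.

```shell
# Sketch: recreate the target-side setup exercised by bdevperf above.
# Assumes nvmf_tgt is already running and rpc.py (from the SPDK tree) is on PATH.
# nvmf_create_transport is assumed; it is not shown in the log excerpt.
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_transport -t tcp
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is up (the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice above), bdevperf connects to the subsystem and drives I/O; the repeated `connect() failed, errno = 111` / reset-failed messages earlier in the log are the host retrying while the listener is torn down and restored by the test.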
00:35:20.388 4047.14 IOPS, 15.81 MiB/s [2024-10-14T11:46:13.174Z] 4578.88 IOPS, 17.89 MiB/s [2024-10-14T11:46:14.547Z] 4995.78 IOPS, 19.51 MiB/s [2024-10-14T11:46:15.481Z] 5331.40 IOPS, 20.83 MiB/s [2024-10-14T11:46:16.414Z] 5606.45 IOPS, 21.90 MiB/s [2024-10-14T11:46:17.348Z] 5833.25 IOPS, 22.79 MiB/s [2024-10-14T11:46:18.281Z] 6030.15 IOPS, 23.56 MiB/s [2024-10-14T11:46:19.214Z] 6197.50 IOPS, 24.21 MiB/s [2024-10-14T11:46:19.214Z] 6345.33 IOPS, 24.79 MiB/s 00:35:27.361 Latency(us) 00:35:27.361 [2024-10-14T11:46:19.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.361 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:27.361 Verification LBA range: start 0x0 length 0x4000 00:35:27.361 Nvme1n1 : 15.05 6325.88 24.71 10308.00 0.00 7651.47 570.41 41943.04 00:35:27.361 [2024-10-14T11:46:19.214Z] =================================================================================================================== 00:35:27.361 [2024-10-14T11:46:19.214Z] Total : 6325.88 24.71 10308.00 0.00 7651.47 570.41 41943.04 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.619 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.619 rmmod nvme_tcp 00:35:27.619 rmmod nvme_fabrics 00:35:27.619 rmmod nvme_keyring 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 397154 ']' 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 397154 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 397154 ']' 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 397154 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 397154 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 397154' 00:35:27.877 killing process with pid 397154 00:35:27.877 13:46:19 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 397154 00:35:27.877 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 397154 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.137 13:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:30.038 00:35:30.038 real 0m22.547s 00:35:30.038 user 1m0.492s 00:35:30.038 sys 0m4.039s 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:30.038 ************************************ 00:35:30.038 END TEST nvmf_bdevperf 00:35:30.038 
************************************ 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.038 ************************************ 00:35:30.038 START TEST nvmf_target_disconnect 00:35:30.038 ************************************ 00:35:30.038 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:30.297 * Looking for test storage... 00:35:30.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.298 --rc genhtml_branch_coverage=1 00:35:30.298 --rc genhtml_function_coverage=1 00:35:30.298 --rc genhtml_legend=1 00:35:30.298 --rc geninfo_all_blocks=1 00:35:30.298 --rc geninfo_unexecuted_blocks=1 
00:35:30.298 00:35:30.298 ' 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.298 --rc genhtml_branch_coverage=1 00:35:30.298 --rc genhtml_function_coverage=1 00:35:30.298 --rc genhtml_legend=1 00:35:30.298 --rc geninfo_all_blocks=1 00:35:30.298 --rc geninfo_unexecuted_blocks=1 00:35:30.298 00:35:30.298 ' 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.298 --rc genhtml_branch_coverage=1 00:35:30.298 --rc genhtml_function_coverage=1 00:35:30.298 --rc genhtml_legend=1 00:35:30.298 --rc geninfo_all_blocks=1 00:35:30.298 --rc geninfo_unexecuted_blocks=1 00:35:30.298 00:35:30.298 ' 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.298 --rc genhtml_branch_coverage=1 00:35:30.298 --rc genhtml_function_coverage=1 00:35:30.298 --rc genhtml_legend=1 00:35:30.298 --rc geninfo_all_blocks=1 00:35:30.298 --rc geninfo_unexecuted_blocks=1 00:35:30.298 00:35:30.298 ' 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.298 13:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.298 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.298 13:46:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:30.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:30.299 13:46:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:32.200 
13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.200 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:32.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:32.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:32.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:32.201 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:32.459 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:32.459 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:32.460 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.460 13:46:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:32.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:35:32.460 00:35:32.460 --- 10.0.0.2 ping statistics --- 00:35:32.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.460 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:32.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:35:32.460 00:35:32.460 --- 10.0.0.1 ping statistics --- 00:35:32.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.460 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:32.460 13:46:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:32.460 ************************************ 00:35:32.460 START TEST nvmf_target_disconnect_tc1 00:35:32.460 ************************************ 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:32.460 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:32.719 [2024-10-14 13:46:24.335936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.719 [2024-10-14 13:46:24.336006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2223220 with 
addr=10.0.0.2, port=4420 00:35:32.719 [2024-10-14 13:46:24.336046] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:32.719 [2024-10-14 13:46:24.336075] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:32.719 [2024-10-14 13:46:24.336090] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:32.719 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:32.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:32.719 Initializing NVMe Controllers 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:32.719 00:35:32.719 real 0m0.095s 00:35:32.719 user 0m0.042s 00:35:32.719 sys 0m0.053s 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:32.719 ************************************ 00:35:32.719 END TEST nvmf_target_disconnect_tc1 00:35:32.719 ************************************ 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:32.719 13:46:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:32.719 ************************************ 00:35:32.719 START TEST nvmf_target_disconnect_tc2 00:35:32.719 ************************************ 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=400311 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 400311 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 400311 ']' 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:32.719 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.719 [2024-10-14 13:46:24.453508] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:35:32.719 [2024-10-14 13:46:24.453601] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.719 [2024-10-14 13:46:24.520368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:32.719 [2024-10-14 13:46:24.567353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.719 [2024-10-14 13:46:24.567414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.719 [2024-10-14 13:46:24.567439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.719 [2024-10-14 13:46:24.567451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.719 [2024-10-14 13:46:24.567461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
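The `waitforlisten 400311` records above block until the freshly started `nvmf_tgt` creates its RPC socket at /var/tmp/spdk.sock. A hedged sketch of that polling pattern — the function name, retry count, and socket path here are illustrative, and SPDK's real helper additionally verifies the pid is still alive:

```shell
# Poll until a process has created its UNIX-domain RPC socket, or give up.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

# With nothing listening, the sketch times out and reports failure:
waitforlisten_sketch /var/tmp/no-such.sock 2 || echo "timed out waiting for socket"
```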
00:35:32.719 [2024-10-14 13:46:24.568986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:32.719 [2024-10-14 13:46:24.569049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:32.719 [2024-10-14 13:46:24.569111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:32.719 [2024-10-14 13:46:24.569114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.977 Malloc0 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.977 13:46:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.977 [2024-10-14 13:46:24.754992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.977 13:46:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.977 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.977 [2024-10-14 13:46:24.783288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=400337 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:32.978 13:46:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:35.545 13:46:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 400311 00:35:35.545 13:46:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 
Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 [2024-10-14 13:46:26.810300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with 
error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 [2024-10-14 13:46:26.810579] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 
00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Read completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.545 Write completed with error (sct=0, sc=8) 00:35:35.545 starting I/O failed 00:35:35.546 Read completed with error (sct=0, sc=8) 00:35:35.546 starting I/O failed 00:35:35.546 Read completed with error (sct=0, sc=8) 00:35:35.546 starting I/O failed 00:35:35.546 [2024-10-14 13:46:26.810910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:35.546 [2024-10-14 13:46:26.811095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.546 [2024-10-14 13:46:26.811142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.546 qpair failed and we were unable to recover it. 00:35:35.546 [2024-10-14 13:46:26.811303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.546 [2024-10-14 13:46:26.811331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.546 qpair failed and we were unable to recover it. 
00:35:35.546 [2024-10-14 13:46:26.811458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.546 [2024-10-14 13:46:26.811484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.546 qpair failed and we were unable to recover it. 00:35:35.546 [2024-10-14 13:46:26.811606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.546 [2024-10-14 13:46:26.811632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.546 qpair failed and we were unable to recover it. 00:35:35.546 [2024-10-14 13:46:26.811758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.546 [2024-10-14 13:46:26.811785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.546 qpair failed and we were unable to recover it. 00:35:35.546 [2024-10-14 13:46:26.811885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.546 [2024-10-14 13:46:26.811911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.546 qpair failed and we were unable to recover it. 00:35:35.546 [2024-10-14 13:46:26.812054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.546 [2024-10-14 13:46:26.812080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.546 qpair failed and we were unable to recover it. 
00:35:35.546 [2024-10-14 13:46:26.812187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.812213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.812301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.812327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.812437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.812468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.812565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.812599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.812718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.812744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.812834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.812860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.812971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.812996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.813090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.813116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.813232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.813257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.813357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.813384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.813497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.813522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.813641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.813666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.813774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.813800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.813964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.814094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.814236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.814358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.814490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.814632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.814779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.814930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.814955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.815051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.815077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.815177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.815202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.815289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.546 [2024-10-14 13:46:26.815315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.546 qpair failed and we were unable to recover it.
00:35:35.546 [2024-10-14 13:46:26.815391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.815429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.815511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.815537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.815622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.815648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.815742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.815768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.815913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.815940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.816035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.816060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.816174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.816205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.816289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.816314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.816407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.816442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.816573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.816600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.816709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.816736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.816852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.816877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.817019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.817044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.817160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.817194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.817280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.817308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.817460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.817508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.817658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.817687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.817831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.817859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.817940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.817966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.818057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.818085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.818223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.818249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.818342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.818370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.818458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.818484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.818625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.818650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.818730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.818756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.818886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.818925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.819050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.819077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.819236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.819266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.819358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.819386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.819541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.819569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.547 [2024-10-14 13:46:26.819647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.547 [2024-10-14 13:46:26.819673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.547 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.819837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.819903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.819992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.820126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.820251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.820392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.820542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.820685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.820793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.820919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.820957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.821074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.821103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.821212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.821238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.821351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.821378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.821476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.821503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.821620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.821645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.821757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.821782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.821866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.821896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.822025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.822051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.822152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.822189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.822282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.822308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.822449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.822477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.822622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.822649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.822734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.822760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.822897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.822922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.823038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.823064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.823149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.823185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.823270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.823295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.823413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.823442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.823550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.823575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.823711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.823737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.548 qpair failed and we were unable to recover it.
00:35:35.548 [2024-10-14 13:46:26.823828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.548 [2024-10-14 13:46:26.823853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.823947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.823973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.824145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.824268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.824374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.824511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.824651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.824770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.824885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.824997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.825038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.825155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.549 [2024-10-14 13:46:26.825185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.549 qpair failed and we were unable to recover it.
00:35:35.549 [2024-10-14 13:46:26.825281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.825309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.825458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.825486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.825632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.825661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.825789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.825817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.825929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.825955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 
00:35:35.549 [2024-10-14 13:46:26.826069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.826095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.826201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.826227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.826365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.826392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.826521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.826548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.826632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.826659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 
00:35:35.549 [2024-10-14 13:46:26.826787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.826814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.826953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.826979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.827062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.827088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.827238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.827266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.827389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.827428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 
00:35:35.549 [2024-10-14 13:46:26.827525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.827551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.827668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.827695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.827772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.827797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.549 [2024-10-14 13:46:26.827914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.549 [2024-10-14 13:46:26.827940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.549 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.828057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.828082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 
00:35:35.550 [2024-10-14 13:46:26.828206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.828232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.828317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.828343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.828469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.828497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.828605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.828632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.828720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.828746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 
00:35:35.550 [2024-10-14 13:46:26.828874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.828902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.829028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.829066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.829173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.829201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.829317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.829346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.829468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.829499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 
00:35:35.550 [2024-10-14 13:46:26.829619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.829646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.829757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.829783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.829876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.829902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.829993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.830019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.830097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.830124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 
00:35:35.550 [2024-10-14 13:46:26.830250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.830275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.830360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.830387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.830515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.830540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.830654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.830681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.830826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.830854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 
00:35:35.550 [2024-10-14 13:46:26.830998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.831025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.831110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.831141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.831234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.550 [2024-10-14 13:46:26.831259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.550 qpair failed and we were unable to recover it. 00:35:35.550 [2024-10-14 13:46:26.831356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.831382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.831471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.831496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 
00:35:35.551 [2024-10-14 13:46:26.831605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.831630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.831717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.831745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.831838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.831865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.832008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.832152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 
00:35:35.551 [2024-10-14 13:46:26.832262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.832405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.832525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.832673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.832812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 
00:35:35.551 [2024-10-14 13:46:26.832930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.832958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.833073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.833103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.833238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.833264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.833380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.833418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.833537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.833562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 
00:35:35.551 [2024-10-14 13:46:26.833688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.833713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.833793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.833820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.833929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.833956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.834095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.834120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.834250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.834277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 
00:35:35.551 [2024-10-14 13:46:26.834386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.834411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.834494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.834520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.834599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.834625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.834716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.834742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.834833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.834858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 
00:35:35.551 [2024-10-14 13:46:26.834984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.835012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.835152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.835179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.835295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.551 [2024-10-14 13:46:26.835321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.551 qpair failed and we were unable to recover it. 00:35:35.551 [2024-10-14 13:46:26.835406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.835432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.835546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.835572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 
00:35:35.552 [2024-10-14 13:46:26.835706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.835732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.835842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.835868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.835997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.836140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.836253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 
00:35:35.552 [2024-10-14 13:46:26.836424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.836536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.836677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.836827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 00:35:35.552 [2024-10-14 13:46:26.836961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.552 [2024-10-14 13:46:26.836987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.552 qpair failed and we were unable to recover it. 
00:35:35.552 [2024-10-14 13:46:26.837095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.552 [2024-10-14 13:46:26.837123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.552 qpair failed and we were unable to recover it.
[repetitive log output elided: the same three-line sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 13:46:26.837219 through 13:46:26.853230, cycling through tqpair values 0x7f9994000b90, 0x7f9990000b90, and 0x5c9340, all targeting addr=10.0.0.2, port=4420]
00:35:35.555 [2024-10-14 13:46:26.853354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.555 [2024-10-14 13:46:26.853382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.555 qpair failed and we were unable to recover it. 00:35:35.555 [2024-10-14 13:46:26.853467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.555 [2024-10-14 13:46:26.853493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.555 qpair failed and we were unable to recover it. 00:35:35.555 [2024-10-14 13:46:26.853583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.555 [2024-10-14 13:46:26.853608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.555 qpair failed and we were unable to recover it. 00:35:35.555 [2024-10-14 13:46:26.853722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.555 [2024-10-14 13:46:26.853748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.555 qpair failed and we were unable to recover it. 00:35:35.555 [2024-10-14 13:46:26.853869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.555 [2024-10-14 13:46:26.853895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 
00:35:35.556 [2024-10-14 13:46:26.854013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.854040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.854145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.854183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.854307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.854334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.854423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.854448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.854589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.854614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 
00:35:35.556 [2024-10-14 13:46:26.854756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.854781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.854898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.854924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.855012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.855039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.855133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.855160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.855255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.855281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 
00:35:35.556 [2024-10-14 13:46:26.855390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.855416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.855556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.855582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.855735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.855761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.855900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.855927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.856043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.856070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 
00:35:35.556 [2024-10-14 13:46:26.856238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.856277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.856381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.856409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.856526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.856552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.856634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.856661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.856772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.856821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 
00:35:35.556 [2024-10-14 13:46:26.856971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.857085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.857237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.857376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.857517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 
00:35:35.556 [2024-10-14 13:46:26.857658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.857782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.857954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.857984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.858114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.858161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.556 qpair failed and we were unable to recover it. 00:35:35.556 [2024-10-14 13:46:26.858285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.556 [2024-10-14 13:46:26.858314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 
00:35:35.557 [2024-10-14 13:46:26.858431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.858458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.858606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.858634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.858745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.858771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.858859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.858884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.858978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 
00:35:35.557 [2024-10-14 13:46:26.859122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.859272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.859386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.859530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.859676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 
00:35:35.557 [2024-10-14 13:46:26.859796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.859936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.859962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.860078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.860106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.860259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.860288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.860402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.860428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 
00:35:35.557 [2024-10-14 13:46:26.860570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.860597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.860683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.860710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.860802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.860828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.860944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.860969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.861053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.861079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 
00:35:35.557 [2024-10-14 13:46:26.861162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.861189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.861329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.861354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.861498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.861530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.861681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.861735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.861848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.861874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 
00:35:35.557 [2024-10-14 13:46:26.862000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.862029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.862152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.862179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.862322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.862350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.862494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.862522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 00:35:35.557 [2024-10-14 13:46:26.862722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.557 [2024-10-14 13:46:26.862771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.557 qpair failed and we were unable to recover it. 
00:35:35.557 [2024-10-14 13:46:26.862865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.862892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.862984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.863011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.863134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.863160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.863296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.863323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.863435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.863460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 
00:35:35.558 [2024-10-14 13:46:26.863552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.863579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.863701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.863729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.863838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.863865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.864006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.864121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 
00:35:35.558 [2024-10-14 13:46:26.864241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.864382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.864502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.864613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.864784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 
00:35:35.558 [2024-10-14 13:46:26.864907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.864934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.865047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.865075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.865238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.865266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.865352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.865390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.865503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.865528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 
00:35:35.558 [2024-10-14 13:46:26.865643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.865670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.865752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.865779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.865875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.865913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.866002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.866144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 
00:35:35.558 [2024-10-14 13:46:26.866264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.866385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.866509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.866664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.558 [2024-10-14 13:46:26.866776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 
00:35:35.558 [2024-10-14 13:46:26.866939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.558 [2024-10-14 13:46:26.866966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.558 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.867048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.867195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.867343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.867478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 
00:35:35.559 [2024-10-14 13:46:26.867589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.867700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.867818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.867969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.867995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.868105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.868143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 
00:35:35.559 [2024-10-14 13:46:26.868243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.868270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.868394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.868421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.868567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.868595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.868718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.868747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.868846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.868871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 
00:35:35.559 [2024-10-14 13:46:26.868985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.869011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.869169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.869197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.869315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.869342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.869455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.869482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.869568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.869594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 
00:35:35.559 [2024-10-14 13:46:26.869732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.869759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.869850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.869877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.869976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.870112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.870244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 
00:35:35.559 [2024-10-14 13:46:26.870415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.870529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.870663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.870804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.870941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.870975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 
00:35:35.559 [2024-10-14 13:46:26.871090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.559 [2024-10-14 13:46:26.871119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.559 qpair failed and we were unable to recover it. 00:35:35.559 [2024-10-14 13:46:26.871222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.871252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.871372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.871399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.871541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.871590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.871725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.871768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 
00:35:35.560 [2024-10-14 13:46:26.871888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.871915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.872008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.872036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.872152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.872203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.872326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.872355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.872473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.872502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 
00:35:35.560 [2024-10-14 13:46:26.872645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.872682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.872802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.872827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.872917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.872943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.873038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.873063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.873152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.873189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 
00:35:35.560 [2024-10-14 13:46:26.873298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.873325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.873473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.873500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.873619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.873646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.873762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.873787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.873925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.873953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 
00:35:35.560 [2024-10-14 13:46:26.874047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.874072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.874194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.874223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.874338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.874366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.874484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.874510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.874599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.874624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 
00:35:35.560 [2024-10-14 13:46:26.874738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.874764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.874871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.874904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.875015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.875041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.875148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.875187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 00:35:35.560 [2024-10-14 13:46:26.875284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.560 [2024-10-14 13:46:26.875315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.560 qpair failed and we were unable to recover it. 
00:35:35.560 [2024-10-14 13:46:26.875443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.875472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.875589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.875615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.875735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.875763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.875845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.875871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.875955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.875982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 
00:35:35.561 [2024-10-14 13:46:26.876100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.876125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.876219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.876244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.876333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.876360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.876477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.876504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.876616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.876641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 
00:35:35.561 [2024-10-14 13:46:26.876726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.876755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.876870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.876897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.877024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.877054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.877167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.877194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.877319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.877346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 
00:35:35.561 [2024-10-14 13:46:26.877447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.877473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.877563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.877589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.877702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.877728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.877846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.877873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 00:35:35.561 [2024-10-14 13:46:26.877989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.561 [2024-10-14 13:46:26.878016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.561 qpair failed and we were unable to recover it. 
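The stream of `connect() failed, errno = 111` records above is POSIX `ECONNREFUSED`: nothing is accepting TCP connections at 10.0.0.2:4420 (4420 is the default NVMe/TCP port), so SPDK's `nvme_tcp_qpair_connect_sock` keeps failing and each qpair is reported as unrecoverable. The same errno can be reproduced outside SPDK with a plain socket; a minimal sketch, assuming Linux errno values and that the probed local port stays closed between the probe and the connect attempt:

```python
import errno
import socket

def try_connect(host: str, port: int) -> int:
    """Return the errno from a TCP connect attempt (0 on success)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port))

# Find a local port that is currently closed: bind to an ephemeral port,
# record its number, then close the listener before connecting to it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

rc = try_connect("127.0.0.1", closed_port)
print(rc == errno.ECONNREFUSED)  # ECONNREFUSED is errno 111 on Linux
```

This mirrors what the log shows: the kernel refuses the connection immediately, and the caller sees errno 111 rather than a timeout.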
00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Read completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Write completed with error (sct=0, sc=8) 00:35:35.561 starting I/O failed 00:35:35.561 Write completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Write completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed
00:35:35.562 Read completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Read completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Read completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Read completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Write completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Read completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Write completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Write completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 Read completed with error (sct=0, sc=8) 00:35:35.562 starting I/O failed 00:35:35.562 [2024-10-14 13:46:26.878367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:35.562 [2024-10-14 13:46:26.878497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.562 [2024-10-14 13:46:26.878526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.562 qpair failed and we were unable to recover it. 00:35:35.562 [2024-10-14 13:46:26.878670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.562 [2024-10-14 13:46:26.878697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.562 qpair failed and we were unable to recover it. 00:35:35.562 [2024-10-14 13:46:26.878778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.562 [2024-10-14 13:46:26.878804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.562 qpair failed and we were unable to recover it. 
00:35:35.562 [2024-10-14 13:46:26.878918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.878944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.879078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.879256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.879372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.879494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.879642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.879754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.879864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.879976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.880110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.880264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.880406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.880519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.880698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.880833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.880950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.880977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.881065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.881092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.881219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.881247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.881355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.881402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.881522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.881550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.881658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.881730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.881896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.881978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.882157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.562 [2024-10-14 13:46:26.882185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.562 qpair failed and we were unable to recover it.
00:35:35.562 [2024-10-14 13:46:26.882276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.882303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.882394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.882421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.882529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.882572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.882681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.882709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.882896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.882959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.883080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.883109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.883211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.883238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.883354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.883381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.883494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.883520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.883695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.883747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.883829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.883862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.883984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.884013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.884100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.884137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.884289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.884317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.884444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.884471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.884552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.884579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.884733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.884762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.884987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.885118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.885262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.885370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.885482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.885591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.885735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.885892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.885930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.886049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.886078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.886170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.886196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.886283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.886311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.886430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.886457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.886567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.886594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.563 [2024-10-14 13:46:26.886707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.563 [2024-10-14 13:46:26.886735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.563 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.886876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.886904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.887049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.887197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.887335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.887457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.887599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.887717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.887853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.887973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.888083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.888212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.888329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.888495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.888662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.888773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.888887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.888915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.889023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.889051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.889166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.889194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.889291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.889319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.889410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.889443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.889558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.889585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.889711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.889739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.889856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.889883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.890023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.890050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.890190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.890230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.890332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.890371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.890487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.890517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.890609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.890638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.890756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.890784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.890897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.890924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.891040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.891068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.891170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.564 [2024-10-14 13:46:26.891199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.564 qpair failed and we were unable to recover it.
00:35:35.564 [2024-10-14 13:46:26.891322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.891349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.891481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.891509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.891596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.891624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.891703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.891731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.891873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.891901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.892036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.892076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.892189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.892229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.892324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.892353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.892491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.892519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.892681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.892741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.892885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.892912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.892997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.893119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.893242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.893369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.893517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.893665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.893778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.893886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.893913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.894065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.894104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.894223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.894263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.894360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.894399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.894507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.894535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.894675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.894703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.894821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.894850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.894941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.894969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.895064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.895095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.895240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.895268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.895394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.895422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.895511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.895537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.895729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.895760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.895885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.895916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.896014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.896040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.896140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.896187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.565 [2024-10-14 13:46:26.896309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.565 [2024-10-14 13:46:26.896338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.565 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.896437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.896477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.896610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.896670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.896782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.896845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.896986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.897014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.897126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.897173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.897287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.897314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.897408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.897435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.897552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.897579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.897721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.897749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.897831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.897859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.897978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.898103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.898274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.898393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.898537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.898672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.898807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.898925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.898953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.899086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.899133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.899268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.899303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.899416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.899444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.899560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.899587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.899700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.899727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.899812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.899840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.899951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.899979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.900089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.900116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.900240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.900267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.900371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.900401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.900549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.900576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.900691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.900719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.900861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.900889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.901033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.901060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.901188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.901216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.901320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.901349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.901512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.901543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.566 [2024-10-14 13:46:26.901664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.566 [2024-10-14 13:46:26.901709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.566 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.901922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.901973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.902082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.902108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.902242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.902269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.902353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.902390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.902506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.902534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.902642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.902669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.902813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.902842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.902969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.903101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.903228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.903344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.903482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.903649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.903759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.903912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.903951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.904081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.904109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.904244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.904273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.904373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.904402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.904516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.904543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.904650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.904677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.904787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.904813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.904959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.904990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.905139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.905177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.905265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.905292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.905384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.905411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.905497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.905525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.905629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.905656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.905740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.567 [2024-10-14 13:46:26.905768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.567 qpair failed and we were unable to recover it.
00:35:35.567 [2024-10-14 13:46:26.905866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.567 [2024-10-14 13:46:26.905895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.567 qpair failed and we were unable to recover it. 00:35:35.567 [2024-10-14 13:46:26.906012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.906042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.906169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.906198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.906283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.906310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.906405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.906432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 
00:35:35.568 [2024-10-14 13:46:26.906572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.906599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.906712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.906740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.906881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.906911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.907009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.907170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 
00:35:35.568 [2024-10-14 13:46:26.907285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.907404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.907509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.907641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.907751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 
00:35:35.568 [2024-10-14 13:46:26.907880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.907911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.908004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.908183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.908294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.908417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 
00:35:35.568 [2024-10-14 13:46:26.908526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.908638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.908774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.908948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.908975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.909091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.909118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 
00:35:35.568 [2024-10-14 13:46:26.909261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.909290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.909421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.909449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.909564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.909591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.909708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.909735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.909821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.909849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 
00:35:35.568 [2024-10-14 13:46:26.909934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.909963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.910110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.910144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.910235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.910262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.910357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.910384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.568 qpair failed and we were unable to recover it. 00:35:35.568 [2024-10-14 13:46:26.910500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.568 [2024-10-14 13:46:26.910528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 
00:35:35.569 [2024-10-14 13:46:26.910613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.910640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.910734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.910763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.910858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.910887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.911006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.911034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.911175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.911203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 
00:35:35.569 [2024-10-14 13:46:26.911308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.911336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.911443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.911471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.911625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.911652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.911779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.911806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.911930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.911957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 
00:35:35.569 [2024-10-14 13:46:26.912072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.912099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.912196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.912224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.912307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.912335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.912452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.912480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.912589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.912622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 
00:35:35.569 [2024-10-14 13:46:26.912748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.912775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.912911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.912938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.913081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.913108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.913238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.913266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.913359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.913388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 
00:35:35.569 [2024-10-14 13:46:26.913501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.913529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.913639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.913666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.913774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.913801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.913887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.913914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.914067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.914107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 
00:35:35.569 [2024-10-14 13:46:26.914211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.914241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.914328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.914356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.914450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.914478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.914607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.914637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 00:35:35.569 [2024-10-14 13:46:26.914764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.569 [2024-10-14 13:46:26.914819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.569 qpair failed and we were unable to recover it. 
00:35:35.570 [2024-10-14 13:46:26.914934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.914961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.915101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.915135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.915263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.915291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.915377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.915406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.915594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.915621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 
00:35:35.570 [2024-10-14 13:46:26.915738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.915765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.915930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.915991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.916172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.916200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.916293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.916320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.916412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.916439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 
00:35:35.570 [2024-10-14 13:46:26.916545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.916572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.916691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.916718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.916821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.916855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.917039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.917079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.917194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.917234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 
00:35:35.570 [2024-10-14 13:46:26.917358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.917388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.917507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.917535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.917688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.917715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.917822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.917849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.917966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.917993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 
00:35:35.570 [2024-10-14 13:46:26.918091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.918136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.918264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.918294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.918439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.918466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.918553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.918579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.918695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.918732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 
00:35:35.570 [2024-10-14 13:46:26.918841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.918870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.918966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.918992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.919108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.919141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.919261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.919289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.570 qpair failed and we were unable to recover it. 00:35:35.570 [2024-10-14 13:46:26.919427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.570 [2024-10-14 13:46:26.919455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.571 qpair failed and we were unable to recover it. 
00:35:35.571 [2024-10-14 13:46:26.919546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.919573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.919695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.919722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.919836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.919862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.919981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.920861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.920980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.921007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.921144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.921185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.921315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.921344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.921458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.921485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.921590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.921618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.921727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.921754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.921895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.921922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.922065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.922213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.922352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.922475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.922588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.922700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.922811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.922976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.923137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.923261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.923400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.923513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.923662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.923775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.923930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.923969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.924059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.924088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.924192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.924221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.924375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.571 [2024-10-14 13:46:26.924403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.571 qpair failed and we were unable to recover it.
00:35:35.571 [2024-10-14 13:46:26.924521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.924548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.924688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.924715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.924836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.924863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.924979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.925007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.925085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.925113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.925215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.925242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.925331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.925358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.925500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.925526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.925756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.925790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.925909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.925953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.926086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.926113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.926220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.926247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.926342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.926382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.926614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.926670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.926820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.926871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.927013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.927040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.927156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.927195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.927311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.927337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.927460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.927489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.927627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.927671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.927770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.927797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.927928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.927956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.928070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.928097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.928193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.928221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.928364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.928391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.928478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.928505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.928598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.928625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.928766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.928793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.928933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.928993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.929171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.929199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.929309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.929336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.929422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.929448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.929559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.929585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.929733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.929789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.929965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.572 [2024-10-14 13:46:26.930016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.572 qpair failed and we were unable to recover it.
00:35:35.572 [2024-10-14 13:46:26.930181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.930221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.930318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.930349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.930493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.930520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.930637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.930665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.930786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.930813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.930938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.930978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.931099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.931137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.931280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.931307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.931401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.931428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.931572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.931599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.931739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.931766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.931966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.931994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.932124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.932173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.932345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.932385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.932575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.932604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.932747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.932774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.932925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.932953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.933085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.933138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.933287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.933315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.933456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.933484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.933638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.933665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.933836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.933889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.934028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.934055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.934187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.934216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.934330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.934357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.934503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.934530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.934646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.934673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.934811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.934838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.934979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.935097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.935248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.935376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.935518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.935660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.935760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.935880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.935908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.936016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.936043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.936160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.936188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.936284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.936324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.936443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.936471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.573 [2024-10-14 13:46:26.936589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.573 [2024-10-14 13:46:26.936617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.573 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.936707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.936734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.936885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.936911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.937051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.937078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.937201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.937234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.937365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.937405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.937549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.937578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.937694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.937721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.937839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.937866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.937986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.938013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.938092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.938120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.938267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.938293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.938412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.938438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.938579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.938606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.938691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.938718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.938854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.938882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.938993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.939019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.939136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.939163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.939311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.939338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.939466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.939506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.939678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.939767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.939950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.940003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.940100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.940136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.940233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.940261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.940368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.940395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.940514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.940542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.940684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.940710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.940826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.940852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.940993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.941019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.941171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.941198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.941316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.941342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.941460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.941492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.941632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.941658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.941743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.941771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.941880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.941906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.942028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.942067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.942237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.942277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.942395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.942424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.942544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.942570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.942716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.942743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.574 qpair failed and we were unable to recover it.
00:35:35.574 [2024-10-14 13:46:26.942920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.574 [2024-10-14 13:46:26.942975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.943110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.943142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.943278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.943305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.943397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.943424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.943541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.943568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.943688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.943715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.943805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.943832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.943928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.943957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.944070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.944097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.944213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.944253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.944401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.944430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.944547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.944575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.944697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.944726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.944971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.945006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.945157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.945197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.945320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.945348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.945467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.945494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.945689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.945769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.945970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.945997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.946087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.946115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.946233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.946260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.946400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.946427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.946535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7260 is same with the state(6) to be set
00:35:35.575 [2024-10-14 13:46:26.946706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.946746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.946871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.946899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.947041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.947068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.947323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.947352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.947469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.947496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.947610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.947636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.947754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.947781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.947885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.947925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.948031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.948059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.948185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.948225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.948374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.948403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.948493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.948520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.948662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.948690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.948775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.948803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.948893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.948919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.949034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.949061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.949155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.575 [2024-10-14 13:46:26.949182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.575 qpair failed and we were unable to recover it.
00:35:35.575 [2024-10-14 13:46:26.949298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.575 [2024-10-14 13:46:26.949325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.575 qpair failed and we were unable to recover it. 00:35:35.575 [2024-10-14 13:46:26.949396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.575 [2024-10-14 13:46:26.949422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.949506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.949533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.949618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.949645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.949790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.949817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 
00:35:35.576 [2024-10-14 13:46:26.949903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.949934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.950023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.950172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.950290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.950435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 
00:35:35.576 [2024-10-14 13:46:26.950567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.950674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.950808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.950943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.950972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.951087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.951115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 
00:35:35.576 [2024-10-14 13:46:26.951259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.951287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.951406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.951432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.951520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.951547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.951663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.951690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.951782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.951809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 
00:35:35.576 [2024-10-14 13:46:26.951935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.951962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.952076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.952103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.952224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.952252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.952340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.952367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.952452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.952478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 
00:35:35.576 [2024-10-14 13:46:26.952615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.952641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.952806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.952857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.952967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.952994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.953117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.953173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.953295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.953325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 
00:35:35.576 [2024-10-14 13:46:26.953442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.953471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.953564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.953591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.953699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.953730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.953819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.953847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.953970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.954025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 
00:35:35.576 [2024-10-14 13:46:26.954114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.576 [2024-10-14 13:46:26.954148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.576 qpair failed and we were unable to recover it. 00:35:35.576 [2024-10-14 13:46:26.954239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.954266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.954384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.954411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.954553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.954608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.954815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.954868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.954980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.955006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.955161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.955202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.955298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.955338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.955468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.955497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.955615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.955643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.955758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.955784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.955882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.955910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.956006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.956034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.956170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.956201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.956315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.956343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.956432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.956459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.956569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.956596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.956714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.956741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.956860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.956890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.957022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.957061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.957214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.957244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.957359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.957387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.957605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.957632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.957725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.957752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.957898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.957967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.958090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.958117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.958246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.958274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.958395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.958422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.958544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.958597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.958814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.958868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.958966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.958994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.959079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.959106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.959213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.959253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.959353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.959382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.959468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.959495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.959604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.959632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.959746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.959772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.959888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.959915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.960065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.960092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.960210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.960237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 
00:35:35.577 [2024-10-14 13:46:26.960325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.960353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.577 qpair failed and we were unable to recover it. 00:35:35.577 [2024-10-14 13:46:26.960441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.577 [2024-10-14 13:46:26.960467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.960556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.960583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.960670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.960696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.960791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.960817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 
00:35:35.578 [2024-10-14 13:46:26.960897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.960923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.961003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.961030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.961150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.961180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.961282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.961321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.961481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.961510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 
00:35:35.578 [2024-10-14 13:46:26.961627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.961654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.961804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.961831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.961951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.961990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.962142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.962171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.962263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.962290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 
00:35:35.578 [2024-10-14 13:46:26.962408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.962434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.962516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.962542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.962676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.962702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.962792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.962821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 00:35:35.578 [2024-10-14 13:46:26.962950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.578 [2024-10-14 13:46:26.962980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.578 qpair failed and we were unable to recover it. 
00:35:35.578 [2024-10-14 13:46:26.963073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.963100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.963221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.963249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.963363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.963390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.963503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.963530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.963613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.963648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.963807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.963868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.964044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.964071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.964212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.964239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.964350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.964378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.964472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.964498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.964610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.964637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.964722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.964751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.964929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.964990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.965170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.965197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.965306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.965334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.965418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.965444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.965552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.965579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.965695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.965722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.965871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.965930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.966098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.966126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.966218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.966245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.578 [2024-10-14 13:46:26.966371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.578 [2024-10-14 13:46:26.966411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.578 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.966528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.966589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.966758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.966816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.966933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.966960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.967089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.967137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.967232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.967261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.967353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.967381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.967517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.967544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.967719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.967788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.967970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.968025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.968151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.968181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.968322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.968350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.968463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.968490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.968603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.968631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.968857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.968921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.969123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.969155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.969277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.969304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.969473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.969534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.969751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.969831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.970026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.970053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.970169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.970196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.970339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.970365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.970502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.970529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.970637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.970668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.970812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.970873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.971100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.971141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.971229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.971256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.971369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.971396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.971512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.971539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.971649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.971675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.971814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.971874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.972074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.972100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.972247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.972273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.972363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.972391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.972507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.972534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.972668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.972694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.972805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.972831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.973052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.973113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.973313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.973340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.973455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.973482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.973572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.579 [2024-10-14 13:46:26.973598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.579 qpair failed and we were unable to recover it.
00:35:35.579 [2024-10-14 13:46:26.973713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.973739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.973857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.973932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.974092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.974118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.974260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.974286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.974426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.974453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.974595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.974622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.974710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.974737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.974825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.974893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.975111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.975142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.975274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.975315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.975437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.975465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.975605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.975632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.975765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.975793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.975935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.975962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.976091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.976118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.976243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.976271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.976354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.976380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.976496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.976524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.976646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.976675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.976789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.976816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.976989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.977048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.977261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.977288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.977381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.977412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.977508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.977535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.977624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.977651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.977738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.977766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.977877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.977904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.978040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.978067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.978235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.978275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.978365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.978393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.978491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.978518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.978661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.978687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.978846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.580 [2024-10-14 13:46:26.978909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.580 qpair failed and we were unable to recover it.
00:35:35.580 [2024-10-14 13:46:26.979024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.580 [2024-10-14 13:46:26.979051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.580 qpair failed and we were unable to recover it. 00:35:35.580 [2024-10-14 13:46:26.979177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.580 [2024-10-14 13:46:26.979205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.580 qpair failed and we were unable to recover it. 00:35:35.580 [2024-10-14 13:46:26.979322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.580 [2024-10-14 13:46:26.979349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.580 qpair failed and we were unable to recover it. 00:35:35.580 [2024-10-14 13:46:26.979491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.580 [2024-10-14 13:46:26.979518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.580 qpair failed and we were unable to recover it. 00:35:35.580 [2024-10-14 13:46:26.979636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.580 [2024-10-14 13:46:26.979695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.580 qpair failed and we were unable to recover it. 
00:35:35.580 [2024-10-14 13:46:26.979915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.979974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.980158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.980186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.980299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.980325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.980416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.980442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.980550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.980576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 
00:35:35.581 [2024-10-14 13:46:26.980791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.980849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.980957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.980985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.981127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.981159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.981244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.981271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.981378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.981405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 
00:35:35.581 [2024-10-14 13:46:26.981545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.981571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.981692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.981754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.982058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.982085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.982213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.982240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.982357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.982384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 
00:35:35.581 [2024-10-14 13:46:26.982498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.982525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.982636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.982662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.982853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.982915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.983175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.983202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.983287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.983314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 
00:35:35.581 [2024-10-14 13:46:26.983403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.983430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.983641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.983702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.983951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.984010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.984181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.984208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.984300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.984327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 
00:35:35.581 [2024-10-14 13:46:26.984450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.984476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.984591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.984619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.984838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.984899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.985118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.985150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.985290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.985316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 
00:35:35.581 [2024-10-14 13:46:26.985398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.985426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.985583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.985646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.985807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.985885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.986125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.986194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.986316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.986342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 
00:35:35.581 [2024-10-14 13:46:26.986451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.986478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.986591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.986618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.581 [2024-10-14 13:46:26.986848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.581 [2024-10-14 13:46:26.986910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.581 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.987153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.987181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.987321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.987348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.987577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.987656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.987870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.987929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.988198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.988226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.988344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.988370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.988510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.988537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.988645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.988702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.988929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.988989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.989175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.989202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.989341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.989367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.989510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.989569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.989785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.989845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.990115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.990192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.990269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.990294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.990383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.990409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.990481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.990507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.990625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.990652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.990865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.990925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.991171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.991199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.991338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.991364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.991479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.991507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.991621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.991648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.991863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.991923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.992212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.992240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.992391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.992465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.992698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.992759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.992986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.993049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.993260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.993286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.993399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.993473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.993688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.993748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.993979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.994039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.994222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.994250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.994357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.994383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.994492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.994519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.994699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.994759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.995094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.995169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 
00:35:35.582 [2024-10-14 13:46:26.995445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.995505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.995713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.995792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.995984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.582 [2024-10-14 13:46:26.996043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.582 qpair failed and we were unable to recover it. 00:35:35.582 [2024-10-14 13:46:26.996235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.583 [2024-10-14 13:46:26.996296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.583 qpair failed and we were unable to recover it. 00:35:35.583 [2024-10-14 13:46:26.996562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.583 [2024-10-14 13:46:26.996621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.583 qpair failed and we were unable to recover it. 
00:35:35.583 [2024-10-14 13:46:26.996871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.583 [2024-10-14 13:46:26.996931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.583 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f999c000b90 against 10.0.0.2:4420, "qpair failed and we were unable to recover it") repeats verbatim for every reconnect attempt from 13:46:26.996871 through 13:46:27.035736; repeated entries elided ...]
00:35:35.586 [2024-10-14 13:46:27.036005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.036065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.036299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.036378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.036650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.036728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.036978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.037039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.037360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.037438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 
00:35:35.586 [2024-10-14 13:46:27.037690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.037767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.038043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.038103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.038447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.038525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.038822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.038900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.039145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.039206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 
00:35:35.586 [2024-10-14 13:46:27.039460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.039538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.039787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.039866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.040152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.040213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.040463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.040523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.040819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.040897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 
00:35:35.586 [2024-10-14 13:46:27.041092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.041167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.041425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.041512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.041814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.041891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.042163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.042225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.042525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.042602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 
00:35:35.586 [2024-10-14 13:46:27.042900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.042977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.043244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.043306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.043564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.043642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.043937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.044014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.044319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.044398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 
00:35:35.586 [2024-10-14 13:46:27.044650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.044730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.044954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.045015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.045300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.045380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.045673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.045752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.045974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.046034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 
00:35:35.586 [2024-10-14 13:46:27.046379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.046459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.046707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.046786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.047052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.047111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.047351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.047432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.047729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.047807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 
00:35:35.586 [2024-10-14 13:46:27.048052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.048112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.048389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.048466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.048726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.048804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.586 qpair failed and we were unable to recover it. 00:35:35.586 [2024-10-14 13:46:27.049065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.586 [2024-10-14 13:46:27.049125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.049412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.049474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.049767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.049845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.050113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.050185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.050442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.050521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.050781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.050861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.051117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.051190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.051409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.051487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.051788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.051865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.052142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.052204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.052484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.052545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.052801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.052879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.053057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.053117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.053404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.053465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.053771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.053847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.054165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.054226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.054496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.054558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.054811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.054889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.055122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.055206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.055451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.055512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.055822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.055902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.056184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.056246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.056550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.056628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.056946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.057024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.057301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.057363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.057613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.057691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.057994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.058071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.058383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.058463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.058722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.058801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.059028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.059087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.059402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.059480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.059795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.059874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.060182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.060244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.060430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.060493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.060755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.060834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.061102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.061176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.061401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.061480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.061742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.061821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.062051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.062113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.062389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.062472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.062739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.062814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.062991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.063037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 
00:35:35.587 [2024-10-14 13:46:27.063197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.063245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.587 qpair failed and we were unable to recover it. 00:35:35.587 [2024-10-14 13:46:27.063393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.587 [2024-10-14 13:46:27.063441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.063624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.063671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.063896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.063956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.064181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.064243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.064494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.064572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.064808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.064889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.065081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.065155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.065420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.065480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.065755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.065815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.065996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.066057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.066295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.066376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.066628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.066706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.066974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.067035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.067289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.067368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.067615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.067693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.067925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.067996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.068266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.068344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.068590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.068669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.068941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.069002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.069255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.069334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.069583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.069662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.069908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.069968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.070174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.070236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.070537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.070616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.070906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.070984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.071272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.071350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.071608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.071686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.071931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.071991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.072254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.072333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.072594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.072671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.072900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.072961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.073249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.073327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.073539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.073617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.073836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.073897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.074161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.074221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.074464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.074524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.074775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.074855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.075082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.075167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.075427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.075507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.075804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.075883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.076066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.076144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.076376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.076460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.076720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.076801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 00:35:35.588 [2024-10-14 13:46:27.076997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.588 [2024-10-14 13:46:27.077058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.588 qpair failed and we were unable to recover it. 
00:35:35.588 [2024-10-14 13:46:27.077266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.077329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.077596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.077676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.077880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.077941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.078147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.078207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.078412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.078473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 
00:35:35.589 [2024-10-14 13:46:27.078700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.078761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.078991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.079053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.079345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.079406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.079671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.079732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.080007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.080067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 
00:35:35.589 [2024-10-14 13:46:27.080353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.080432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.080646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.080740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.080973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.081033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.081276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.081356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.081650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.081729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 
00:35:35.589 [2024-10-14 13:46:27.082009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.082069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.082350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.082429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.082689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.082751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.082956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.083017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.083250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.083330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 
00:35:35.589 [2024-10-14 13:46:27.083545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.083625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.083855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.083918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.084107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.084182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.084372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.084434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.084651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.084711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 
00:35:35.589 [2024-10-14 13:46:27.084909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.084972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.085206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.085267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.085473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.085533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.085774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.085834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.086074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.086147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 
00:35:35.589 [2024-10-14 13:46:27.086427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.086487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.086785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.086864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.087102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.087182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.087425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.087503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.087768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.087846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 
00:35:35.589 [2024-10-14 13:46:27.088018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.088078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.088318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.088398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.088713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.088791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.589 [2024-10-14 13:46:27.089088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.589 [2024-10-14 13:46:27.089300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.589 qpair failed and we were unable to recover it. 00:35:35.590 [2024-10-14 13:46:27.089542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.590 [2024-10-14 13:46:27.089599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.590 qpair failed and we were unable to recover it. 
00:35:35.590 [2024-10-14 13:46:27.089865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.590 [2024-10-14 13:46:27.089931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.590 qpair failed and we were unable to recover it. 00:35:35.590 [2024-10-14 13:46:27.090162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.590 [2024-10-14 13:46:27.090217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.590 qpair failed and we were unable to recover it. 00:35:35.590 [2024-10-14 13:46:27.090389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.590 [2024-10-14 13:46:27.090469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.590 qpair failed and we were unable to recover it. 00:35:35.590 [2024-10-14 13:46:27.090720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.590 [2024-10-14 13:46:27.090783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.590 qpair failed and we were unable to recover it. 00:35:35.590 [2024-10-14 13:46:27.091086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.590 [2024-10-14 13:46:27.091148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.590 qpair failed and we were unable to recover it. 
00:35:35.590 [2024-10-14 13:46:27.091330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.590 [2024-10-14 13:46:27.091383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.590 qpair failed and we were unable to recover it.
00:35:35.592 [log condensed: the same three-line error pattern (connect() failed, errno = 111 / sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously, over 100 further occurrences, between 2024-10-14 13:46:27.091551 and 13:46:27.125075]
00:35:35.592 [2024-10-14 13:46:27.125407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.592 [2024-10-14 13:46:27.125498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.592 qpair failed and we were unable to recover it.
00:35:35.593 [... 64 further identical connect()/qpair failure sequences for tqpair=0x7f999c000b90 omitted (13:46:27.126349 to 13:46:27.146471) ...]
00:35:35.593 [... 2 further identical connect()/qpair failure sequences for tqpair=0x7f999c000b90 omitted (13:46:27.146691 to 13:46:27.147074) ...]
00:35:35.594 [2024-10-14 13:46:27.147424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.594 [2024-10-14 13:46:27.147522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.594 qpair failed and we were unable to recover it.
00:35:35.595 [... 32 further identical connect()/qpair failure sequences for tqpair=0x7f9994000b90 omitted (13:46:27.147809 to 13:46:27.157915) ...]
00:35:35.595 [2024-10-14 13:46:27.158190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.158248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.158485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.158551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.158821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.158886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.159116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.159210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.159486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.159553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 
00:35:35.595 [2024-10-14 13:46:27.159821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.159888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.160144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.160220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.160396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.160477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.160739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.160795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.161082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.161185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 
00:35:35.595 [2024-10-14 13:46:27.161400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.161456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.161711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.161767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.161994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.162062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.162374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.162452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.162725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.162781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 
00:35:35.595 [2024-10-14 13:46:27.163032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.163097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.163344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.163400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.163625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.163682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.163943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.164008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.164239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.164296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 
00:35:35.595 [2024-10-14 13:46:27.164521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.164577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.164801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.164870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.165157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.165230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.165403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.165459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.165756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.165821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 
00:35:35.595 [2024-10-14 13:46:27.166102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.166209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.166389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.166446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.166709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.166774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.167018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.167084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.167397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.167465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 
00:35:35.595 [2024-10-14 13:46:27.167720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.167786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.168040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.168106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.168383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.168449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.168711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.168775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 00:35:35.595 [2024-10-14 13:46:27.169065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.595 [2024-10-14 13:46:27.169149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.595 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.169397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.169463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.169714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.169780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.170047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.170113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.170349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.170415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.170721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.170785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.171050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.171116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.171416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.171483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.171775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.171840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.172037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.172101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.172347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.172416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.172674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.172742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.173033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.173098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.173375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.173441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.173643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.173710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.173927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.173993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.174246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.174313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.174601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.174668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.174917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.174995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.175274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.175342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.175601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.175666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.175888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.175952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.176171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.176238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.176532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.176598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.176813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.176878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.177084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.177164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.177417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.177482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.177773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.177838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.178152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.178219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.178485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.178552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.178750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.178815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.179055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.179121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.179429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.179497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.179754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.179819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.180101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.180189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.180443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.180511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.596 [2024-10-14 13:46:27.180770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.180836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.181030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.181096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.181370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.181438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.181750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.181815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 00:35:35.596 [2024-10-14 13:46:27.182105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.596 [2024-10-14 13:46:27.182191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.596 qpair failed and we were unable to recover it. 
00:35:35.597 [2024-10-14 13:46:27.182416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.597 [2024-10-14 13:46:27.182481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.597 qpair failed and we were unable to recover it. 00:35:35.597 [2024-10-14 13:46:27.182772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.597 [2024-10-14 13:46:27.182838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.597 qpair failed and we were unable to recover it. 00:35:35.597 [2024-10-14 13:46:27.183087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.597 [2024-10-14 13:46:27.183181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.597 qpair failed and we were unable to recover it. 00:35:35.597 [2024-10-14 13:46:27.183435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.597 [2024-10-14 13:46:27.183500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.597 qpair failed and we were unable to recover it. 00:35:35.597 [2024-10-14 13:46:27.183762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.597 [2024-10-14 13:46:27.183828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.597 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.219855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.219921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.220119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.220197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.220456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.220522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.220777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.220841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.221111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.221191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.221473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.221538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.221761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.221828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.222077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.222162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.222469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.222534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.222794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.222862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.223126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.223230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.223516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.223582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.223876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.223940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.224166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.224233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.224436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.224502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.224804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.224869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.225180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.225247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.225548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.225612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.225831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.225896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.226149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.226216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.226500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.226566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.226781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.226848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.227067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.227184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.227503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.227570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.227820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.227885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.228093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.228179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.228434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.228498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.228740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.228807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.229070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.229157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.229462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.229527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.229775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.229842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.230083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.230167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.230422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.230488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.230732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.230797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.231086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.231188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 
00:35:35.600 [2024-10-14 13:46:27.231485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.231550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.231812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.231881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.232099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.232186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.232398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.600 [2024-10-14 13:46:27.232464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.600 qpair failed and we were unable to recover it. 00:35:35.600 [2024-10-14 13:46:27.232726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.232792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.233082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.233164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.233432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.233498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.233754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.233820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.234102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.234186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.234439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.234503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.234748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.234813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.235078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.235162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.235406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.235472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.235777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.235842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.236072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.236158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.236373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.236438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.236699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.236764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.237016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.237084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.237333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.237399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.237606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.237672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.237966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.238031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.238296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.238363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.238620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.238686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.238952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.239017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.239299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.239368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.239666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.239731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.239955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.240021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.240314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.240391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.240639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.240705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.240977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.241043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.241316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.241383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.241629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.241695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.241951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.242016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.242274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.242341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.242601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.242668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.242954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.243020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.243284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.243351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.243622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.243688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.243990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.244055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 00:35:35.601 [2024-10-14 13:46:27.244269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.244334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
00:35:35.601 [2024-10-14 13:46:27.244587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.601 [2024-10-14 13:46:27.244652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.601 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix.c:1055:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 13:46:27.244652 through 13:46:27.281379; duplicate entries elided ...]
00:35:35.604 [2024-10-14 13:46:27.281623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.604 [2024-10-14 13:46:27.281690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.604 qpair failed and we were unable to recover it. 00:35:35.604 [2024-10-14 13:46:27.281953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.604 [2024-10-14 13:46:27.282018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.604 qpair failed and we were unable to recover it. 00:35:35.604 [2024-10-14 13:46:27.282300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.604 [2024-10-14 13:46:27.282366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.604 qpair failed and we were unable to recover it. 00:35:35.604 [2024-10-14 13:46:27.282621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.604 [2024-10-14 13:46:27.282686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.604 qpair failed and we were unable to recover it. 00:35:35.604 [2024-10-14 13:46:27.282898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.604 [2024-10-14 13:46:27.282965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.604 qpair failed and we were unable to recover it. 
00:35:35.604 [2024-10-14 13:46:27.283200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.283268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.283485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.283551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.283763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.283832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.284079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.284158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.284416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.284482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.284699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.284764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.284986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.285051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.285255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.285321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.285585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.285651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.285863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.285928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.286177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.286244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.286499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.286565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.286783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.286850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.287089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.287175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.287457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.287525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.287780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.287846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.288093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.288177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.288392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.288457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.288674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.288741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.289007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.289072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.289356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.289424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.289679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.289744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.290000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.290065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.290328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.290396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.290685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.290750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.291000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.291065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.291348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.291414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.291709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.291784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.292070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.292153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.292451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.292516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.292723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.292789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.293022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.293087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.293312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.293379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.293625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.293691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.293940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.294007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.294289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.294357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.294614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.294680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.294921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.294986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.295298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.295366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 00:35:35.605 [2024-10-14 13:46:27.295631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.605 [2024-10-14 13:46:27.295696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.605 qpair failed and we were unable to recover it. 
00:35:35.605 [2024-10-14 13:46:27.295941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.296006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.296320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.296388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.296638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.296705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.297001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.297066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.297334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.297400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 
00:35:35.606 [2024-10-14 13:46:27.297647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.297713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.297999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.298064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.298383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.298449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.298698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.298764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.298963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.299031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 
00:35:35.606 [2024-10-14 13:46:27.299274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.299341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.299595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.299662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.299927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.299991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.300244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.300312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.300612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.300679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 
00:35:35.606 [2024-10-14 13:46:27.300950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.301015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.301248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.301317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.301527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.301592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.301841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.301908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.302202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.302270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 
00:35:35.606 [2024-10-14 13:46:27.302538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.302604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.302819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.302883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.303157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.303223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.303466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.303532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.303831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.303895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 
00:35:35.606 [2024-10-14 13:46:27.304151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.304218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.304432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.304500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.304740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.304817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.305108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.305194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.305421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.305487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 
00:35:35.606 [2024-10-14 13:46:27.305740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.305804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.306069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.306153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.306369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.306436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.306685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.306750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 00:35:35.606 [2024-10-14 13:46:27.307001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.606 [2024-10-14 13:46:27.307065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.606 qpair failed and we were unable to recover it. 
00:35:35.609 [... identical record repeats through 2024-10-14 13:46:27.342953: posix.c:1055:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:35:35.609 [2024-10-14 13:46:27.343194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.343261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.343502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.343568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.343810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.343876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.344158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.344225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.344441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.344507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 
00:35:35.609 [2024-10-14 13:46:27.344753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.344818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.345070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.345148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.345348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.345414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.345657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.345722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.609 [2024-10-14 13:46:27.345958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.346024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 
00:35:35.609 [2024-10-14 13:46:27.346295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.609 [2024-10-14 13:46:27.346362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.609 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.346645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.346710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.346955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.347021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.347286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.347352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.347606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.347673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 
00:35:35.610 [2024-10-14 13:46:27.347934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.347999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.348251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.348318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.348614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.348679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.348946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.349011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.349276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.349343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 
00:35:35.610 [2024-10-14 13:46:27.349601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.349668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.349919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.349985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.350234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.350302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.350544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.350610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.350864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.350929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 
00:35:35.610 [2024-10-14 13:46:27.351197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.351267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.351518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.351584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.351849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.351914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.352167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.352234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.352442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.352517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 
00:35:35.610 [2024-10-14 13:46:27.352773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.352838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.353087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.353170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.353395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.353461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.353671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.353738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.353939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.354007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 
00:35:35.610 [2024-10-14 13:46:27.354255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.354324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.354582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.354647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.354898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.354963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.355248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.355316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.355569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.355635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 
00:35:35.610 [2024-10-14 13:46:27.355879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.355944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.356211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.356277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.356535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.356600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.356830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.356894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.357167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.357233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 
00:35:35.610 [2024-10-14 13:46:27.357434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.610 [2024-10-14 13:46:27.357499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.610 qpair failed and we were unable to recover it. 00:35:35.610 [2024-10-14 13:46:27.357753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.357819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.358062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.358147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.358434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.358502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.358746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.358811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 
00:35:35.611 [2024-10-14 13:46:27.359057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.359124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.359384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.359450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.359708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.359774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.360039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.360105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.360449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.360515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 
00:35:35.611 [2024-10-14 13:46:27.360774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.360844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.361067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.361152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.361381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.361448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.361709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.361774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.362013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.362078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 
00:35:35.611 [2024-10-14 13:46:27.362340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.362406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.362621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.362689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.362926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.362992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.363243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.363311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 00:35:35.611 [2024-10-14 13:46:27.363583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.611 [2024-10-14 13:46:27.363648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.611 qpair failed and we were unable to recover it. 
00:35:35.896 [2024-10-14 13:46:27.363887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.363953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.364178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.364248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.364520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.364585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.364840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.364905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.365180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.365257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 
00:35:35.896 [2024-10-14 13:46:27.365473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.365538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.365789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.365856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.366072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.366154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.366382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.366446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 00:35:35.896 [2024-10-14 13:46:27.366698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.366765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 
00:35:35.896 [2024-10-14 13:46:27.366991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.896 [2024-10-14 13:46:27.367057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.896 qpair failed and we were unable to recover it. 
[... the same error triplet — posix.c:1055:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 13:46:27.367 through 13:46:27.403, almost always on tqpair=0x7f9994000b90, briefly on tqpair=0x7f9990000b90 around 13:46:27.384-13:46:27.385; repeated occurrences elided ...]
00:35:35.899 [2024-10-14 13:46:27.403474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.899 [2024-10-14 13:46:27.403549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.899 qpair failed and we were unable to recover it. 
00:35:35.899 [2024-10-14 13:46:27.403804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.899 [2024-10-14 13:46:27.403869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.899 qpair failed and we were unable to recover it. 00:35:35.899 [2024-10-14 13:46:27.404167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.899 [2024-10-14 13:46:27.404236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.899 qpair failed and we were unable to recover it. 00:35:35.899 [2024-10-14 13:46:27.404490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.899 [2024-10-14 13:46:27.404555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.899 qpair failed and we were unable to recover it. 00:35:35.899 [2024-10-14 13:46:27.404855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.899 [2024-10-14 13:46:27.404921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.899 qpair failed and we were unable to recover it. 00:35:35.899 [2024-10-14 13:46:27.405181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.899 [2024-10-14 13:46:27.405249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.899 qpair failed and we were unable to recover it. 
00:35:35.899 [2024-10-14 13:46:27.405510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.405575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.405778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.405851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.406042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.406107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.406319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.406385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.406675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.406740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 
00:35:35.900 [2024-10-14 13:46:27.407039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.407105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.407366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.407432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.407674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.407740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.408015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.408082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.408349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.408415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 
00:35:35.900 [2024-10-14 13:46:27.408711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.408776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.409031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.409096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.409376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.409442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.409656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.409722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.409959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.410024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 
00:35:35.900 [2024-10-14 13:46:27.410350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.410418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.410718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.410783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.411025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.411091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.411402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.411467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.411760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.411825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 
00:35:35.900 [2024-10-14 13:46:27.412073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.412157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.412460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.412525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.412835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.412901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.413110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.413210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.413502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.413567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 
00:35:35.900 [2024-10-14 13:46:27.413878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.413944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.414202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.414268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.414502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.414568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.414810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.414878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.415176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.415244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 
00:35:35.900 [2024-10-14 13:46:27.415537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.415603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.415858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.415925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.416146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.416215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.416458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.416525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.416714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.416790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 
00:35:35.900 [2024-10-14 13:46:27.417034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.417099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.417367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.417432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.417679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.417745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.900 [2024-10-14 13:46:27.418036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.900 [2024-10-14 13:46:27.418101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.900 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.418361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.418426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 
00:35:35.901 [2024-10-14 13:46:27.418718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.418784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.419008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.419072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.419347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.419414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.419676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.419742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.419993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.420059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 
00:35:35.901 [2024-10-14 13:46:27.420360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.420427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.420627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.420693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.420937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.421003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.421215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.421283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.421543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.421607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 
00:35:35.901 [2024-10-14 13:46:27.421851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.421918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.422179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.422247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.422544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.422610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.422817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.422883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.423214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.423281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 
00:35:35.901 [2024-10-14 13:46:27.423481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.423547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.423755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.423821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.424063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.424147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.424361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.424426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.424676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.424744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 
00:35:35.901 [2024-10-14 13:46:27.425033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.425100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.425342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.425408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.425691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.425757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.425998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.426064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.426305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.426376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 
00:35:35.901 [2024-10-14 13:46:27.426640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.426707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.427014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.427079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.427401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.427468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.427716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.427786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.901 [2024-10-14 13:46:27.428051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.428117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 
00:35:35.901 [2024-10-14 13:46:27.428386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.901 [2024-10-14 13:46:27.428453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.901 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.428704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.428769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.429025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.429090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.429383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.429452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.429690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.429768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 
00:35:35.902 [2024-10-14 13:46:27.430030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.430095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.430377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.430443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.430741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.430808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.431027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.431093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 00:35:35.902 [2024-10-14 13:46:27.431398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.902 [2024-10-14 13:46:27.431464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.902 qpair failed and we were unable to recover it. 
00:35:35.902 [2024-10-14 13:46:27.431766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.431833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.432075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.432158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.432386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.432452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.432708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.432779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.432996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.433064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.433308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.433377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.433600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.433669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.433972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.434040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.434373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.434441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.434735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.434801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.435097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.435197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.435341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.435376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.435516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.435547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.435704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.435737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.435896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.435927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.436103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.436144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.436252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.436285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.436447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.436480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.436619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.436651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.436799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.436831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.436968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.437001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.437116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.437176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.437304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.437339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.437459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.437493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.437661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.437693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.437863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.902 [2024-10-14 13:46:27.437895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.902 qpair failed and we were unable to recover it.
00:35:35.902 [2024-10-14 13:46:27.438026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.438059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.438223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.438256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.438369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.438401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.438535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.438567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.438703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.438739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.438888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.438922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.439090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.439124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.439274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.439309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.439480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.439514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.439690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.439724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.439896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.439931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.440103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.440145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.440264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.440299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.440441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.440474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.440646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.440680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.440795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.440829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.440974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.441007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.441189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.441224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.441323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.441356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.441494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.441527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.441679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.441713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.441849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.441884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.442036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.442073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.442208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.442244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.442358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.442394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.442539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.442572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.442711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.442745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.442883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.442917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.443056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.443091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.443258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.443292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.443402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.443442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.443581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.443615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.443789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.443822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.903 qpair failed and we were unable to recover it.
00:35:35.903 [2024-10-14 13:46:27.443962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.903 [2024-10-14 13:46:27.443995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.444148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.444181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.444304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.444339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.444458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.444492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.444634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.444668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.444812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.444846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.444988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.445021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.445119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.445158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.445296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.445330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.445478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.445511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.445624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.445659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.445795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.445829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.445943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.445976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.446171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.446223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.446338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.446376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.446490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.446526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.446697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.446732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.446903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.446938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.447074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.447108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.447257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.447290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.447404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.447439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.447579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.447613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.447727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.447760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.447883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.447917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.448089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.448122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.448278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.448312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.448486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.448520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.448663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.448697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.448809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.448843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.449014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.449048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.449179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.449213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.449354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.449388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.449532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.449566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.449695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.449729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.449878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.449919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.450097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.450140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.450288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.450323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.450501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.450534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.450649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.450684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.450830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.450864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.904 [2024-10-14 13:46:27.451006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.904 [2024-10-14 13:46:27.451040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.904 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.451213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.451247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.451386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.451419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.451558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.451598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.451744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.451778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.451918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.451952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.452136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.452171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.452290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.452323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.452477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.452511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.452626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.452660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.452833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.452866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.453011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.453044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.453202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.453237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.453382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.453415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.453561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.453594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.453732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.453765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.453937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.453971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.454108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.454155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.454327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.454361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.454482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.454516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.454633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.454666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.454796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.454829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.454978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.455011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.455150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.455184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.455360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.455394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.455574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.455608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.455710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.455745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.455863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.455897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.456045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.456079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.456228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.456261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.456408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.456441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.905 [2024-10-14 13:46:27.456622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.905 [2024-10-14 13:46:27.456656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.905 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.456769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.456803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.456956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.456990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.457143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.457177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.457298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.457331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.457483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.457517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.457652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.457686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.457861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.457895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.458032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.458065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.458241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.458276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.458389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.458422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.458581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.458614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.458727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.458760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.458895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.458934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.459035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.459068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.459196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.459230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.459369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.459403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.459545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.459577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.459747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.459780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.459949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.459982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.460138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.460173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.460332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.460365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.460513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.460546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.460662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.460696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.460817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.460851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.460950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.460983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.461115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.461158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.461314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.461348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.461535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.461568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.461673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.461706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.461851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.461885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.461997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.462030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.462173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.462207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.462393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.462425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.462546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.462579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.462720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.462754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.906 [2024-10-14 13:46:27.462896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.906 [2024-10-14 13:46:27.462929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.906 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.463106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.463147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.463261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.463294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.463449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.463482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.463577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.463616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.463792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.463825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.463997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.464030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.464175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.464208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.464317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.464350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.464499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.464533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.464702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.464736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.464863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.464897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.465070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.465103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.465219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.465252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.465371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.465404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.465509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.465542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.465712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.465745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.465857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.465890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.466034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.466068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.466214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.466248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.466395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.466429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.466550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.466584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.466701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.466735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.466880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.907 [2024-10-14 13:46:27.466914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.907 qpair failed and we were unable to recover it.
00:35:35.907 [2024-10-14 13:46:27.467098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.467139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.467273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.467307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.467476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.467512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.467616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.467650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.467793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.467826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 
00:35:35.907 [2024-10-14 13:46:27.467973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.468014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.468168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.468211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.468372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.468430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.468580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.468615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 00:35:35.907 [2024-10-14 13:46:27.468770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.907 [2024-10-14 13:46:27.468805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.907 qpair failed and we were unable to recover it. 
00:35:35.907 [2024-10-14 13:46:27.468943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.468979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.469122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.469163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.469344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.469378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.469523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.469556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.469654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.469687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 
00:35:35.908 [2024-10-14 13:46:27.469858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.469892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.469996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.470030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.470198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.470232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.470373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.470407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.470580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.470614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 
00:35:35.908 [2024-10-14 13:46:27.470768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.470801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.470941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.470989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.471179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.471216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.471394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.471428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.471583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.471619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 
00:35:35.908 [2024-10-14 13:46:27.471793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.471828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.471962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.471997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.472121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.472166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.472280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.472315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.472433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.472467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 
00:35:35.908 [2024-10-14 13:46:27.472614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.472649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.472823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.472859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.473016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.473049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.473200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.473233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.473335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.473368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 
00:35:35.908 [2024-10-14 13:46:27.473508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.473541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.473697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.473732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.473836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.473870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.474036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.474069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.474220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.474255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 
00:35:35.908 [2024-10-14 13:46:27.474393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.474426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.474566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.474599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.474731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.474765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.474934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.474968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.908 [2024-10-14 13:46:27.475082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.475115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 
00:35:35.908 [2024-10-14 13:46:27.475286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.908 [2024-10-14 13:46:27.475321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.908 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.475469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.475506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.475612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.475648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.475765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.475807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.475945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.475980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 
00:35:35.909 [2024-10-14 13:46:27.476120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.476164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.476317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.476352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.476492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.476527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.476698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.476733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.476873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.476908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 
00:35:35.909 [2024-10-14 13:46:27.477054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.477090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.477275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.477311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.477496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.477531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.477673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.477709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.477840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.477876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 
00:35:35.909 [2024-10-14 13:46:27.478028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.478064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.478180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.478217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.478356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.478391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.478509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.478543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.478659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.478693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 
00:35:35.909 [2024-10-14 13:46:27.478845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.478879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.479019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.479054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.479206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.479242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.479388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.479424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.479576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.479612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 
00:35:35.909 [2024-10-14 13:46:27.479733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.479768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.479904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.479938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.480115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.480158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.480297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.480333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.480435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.480470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 
00:35:35.909 [2024-10-14 13:46:27.480609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.480644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.480824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.480860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.481033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.481068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.481225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.481260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.481369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.481404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 
00:35:35.909 [2024-10-14 13:46:27.481543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.481577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.481716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.481751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.481893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.481928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.909 [2024-10-14 13:46:27.482031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.909 [2024-10-14 13:46:27.482066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.909 qpair failed and we were unable to recover it. 00:35:35.910 [2024-10-14 13:46:27.482276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.910 [2024-10-14 13:46:27.482312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.910 qpair failed and we were unable to recover it. 
00:35:35.910 [2024-10-14 13:46:27.482464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.910 [2024-10-14 13:46:27.482499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.910 qpair failed and we were unable to recover it. 
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats roughly 110 more times between 13:46:27.482 and 13:46:27.503, alternating between tqpair=0x5c9340 and tqpair=0x7f999c000b90, always against addr=10.0.0.2, port=4420 ...]
00:35:35.913 [2024-10-14 13:46:27.503441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.503477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.913 [2024-10-14 13:46:27.503622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.503658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.503843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.503878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.504028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.504062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.504213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.504249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.504383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.504432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.913 [2024-10-14 13:46:27.504656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.504704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.504890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.504940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.505170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.505220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.505396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.505442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.505665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.505714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.913 [2024-10-14 13:46:27.505916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.505983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.506196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.506243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.506437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.506484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.506709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.506758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.507078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.507183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.913 [2024-10-14 13:46:27.507410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.507488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.507774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.507839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.508140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.508215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.508406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.508488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.508770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.508834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.913 [2024-10-14 13:46:27.509176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.509236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.509501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.509566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.509839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.509903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.510157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.510224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.510377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.510456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.913 [2024-10-14 13:46:27.510747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.510811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.511151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.511227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.511422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.511507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.511789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.511853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.512177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.512227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.913 [2024-10-14 13:46:27.512443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.512508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.512748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.512815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.513179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.513230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.513429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.513509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 00:35:35.913 [2024-10-14 13:46:27.513753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.913 [2024-10-14 13:46:27.513817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.913 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.514104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.514214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.514480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.514545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.514833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.514898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.515190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.515240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.515491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.515558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.515859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.515933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.516246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.516295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.516585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.516634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.516831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.516882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.517170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.517219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.517423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.517500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.517785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.517848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.518190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.518250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.518441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.518491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.518676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.518728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.518986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.519051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.519381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.519457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.519719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.519784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.520030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.520096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.520370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.520434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.520696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.520745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.520964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.521029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.521285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.521353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.521610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.521675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.521924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.521993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.522292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.522342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.522534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.522582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.522836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.522901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.523121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.523199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.523488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.523579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.523873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.523939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.524182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.524251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.524551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.524626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.524926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.524992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.914 [2024-10-14 13:46:27.525233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.525264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 
00:35:35.914 [2024-10-14 13:46:27.525416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.914 [2024-10-14 13:46:27.525445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.914 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.525599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.525628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.525746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.525776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.525900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.525932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.526055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.526085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 
00:35:35.915 [2024-10-14 13:46:27.526225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.526255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.526405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.526442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.526650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.526715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.527024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.527100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 00:35:35.915 [2024-10-14 13:46:27.527326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.915 [2024-10-14 13:46:27.527356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.915 qpair failed and we were unable to recover it. 
00:35:35.915 [2024-10-14 13:46:27.527541 through 13:46:27.557484] posix.c:1055:posix_sock_create / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: the same record triple — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats roughly 110 more times; duplicate entries omitted.
00:35:35.918 [2024-10-14 13:46:27.557580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.557612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.557741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.557772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.557909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.557939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.558069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.558099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.558220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.558251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 
00:35:35.918 [2024-10-14 13:46:27.558370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.558400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.558534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.558565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.558695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.558725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.558825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.558857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.558963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.558993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 
00:35:35.918 [2024-10-14 13:46:27.559132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.559164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.559289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.559319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.559445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.559475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.559601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.559632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.559789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.559819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 
00:35:35.918 [2024-10-14 13:46:27.559910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.559940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.560098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.560143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.560275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.560306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.560412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.560444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.560567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.560597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 
00:35:35.918 [2024-10-14 13:46:27.560755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.560786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.561305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.561349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.561514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.918 [2024-10-14 13:46:27.561547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.918 qpair failed and we were unable to recover it. 00:35:35.918 [2024-10-14 13:46:27.561704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.561733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.561841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.561870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 
00:35:35.919 [2024-10-14 13:46:27.562364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.562406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.562635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.562715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.562911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.562940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.563074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.563103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.563242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.563271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 
00:35:35.919 [2024-10-14 13:46:27.563487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.563563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.563862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.563930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.564086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.564113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.564238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.564267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.564368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.564408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 
00:35:35.919 [2024-10-14 13:46:27.564526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.564556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.564654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.564737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.564992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.565021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.565182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.565211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.565313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.565342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 
00:35:35.919 [2024-10-14 13:46:27.565547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.565612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.565914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.565991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.566253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.566283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.566374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.566403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.566620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.566686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 
00:35:35.919 [2024-10-14 13:46:27.566908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.566973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.567193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.567223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.567352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.567381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.567490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.567519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.567644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.567672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 
00:35:35.919 [2024-10-14 13:46:27.567763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.567799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.568005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.568070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.568240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.568268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.568415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.568451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.568697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.568763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 
00:35:35.919 [2024-10-14 13:46:27.568883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.568912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.569031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.569059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.569167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.569197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.919 [2024-10-14 13:46:27.569290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.919 [2024-10-14 13:46:27.569318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.919 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.569402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.569431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 
00:35:35.920 [2024-10-14 13:46:27.569564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.569592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.569801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.569868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.570194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.570224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.570348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.570376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.570503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.570573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 
00:35:35.920 [2024-10-14 13:46:27.570832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.570895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.571168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.571198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.571320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.571349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.571453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.571482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.571618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.571647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 
00:35:35.920 [2024-10-14 13:46:27.571742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.571771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.571887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.571915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.572023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.572052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.572198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.572227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.572336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.572369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 
00:35:35.920 [2024-10-14 13:46:27.572496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.572532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.572653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.572738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.572976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.573027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.573138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.573167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.573316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.573344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 
00:35:35.920 [2024-10-14 13:46:27.573469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.573497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.573615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.573643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.573773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.573814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.574159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.574213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 00:35:35.920 [2024-10-14 13:46:27.574336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.920 [2024-10-14 13:46:27.574366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.920 qpair failed and we were unable to recover it. 
00:35:35.920 [2024-10-14 13:46:27.574464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.574500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.574679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.574715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.574866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.574902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.575037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.575073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.575213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.575242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.575364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.575392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.575501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.575529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.575656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.575685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.920 [2024-10-14 13:46:27.575779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.920 [2024-10-14 13:46:27.575809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.920 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.575900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.575928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.576023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.576052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.576176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.576205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.576294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.576322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.576478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.576507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.576655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.576691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.576814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.576872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.577087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.577123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.577257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.577286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.577438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.577491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.577609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.577688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.577924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.577994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.578228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.578257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.578385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.578413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.578571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.578647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.578946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.579011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.579259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.579288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.579409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.579439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.579534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.579564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.579713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.579754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.579882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.579911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.580067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.580105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.580239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.580268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.580367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.580396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.580519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.580557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.580667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.580695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.580789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.580818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.580938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.580967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.581091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.581138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.581233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.581262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.581364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.581393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.581499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.581528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.921 [2024-10-14 13:46:27.581627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.921 [2024-10-14 13:46:27.581666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.921 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.581756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.581785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.581881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.581910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.582022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.582051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.582159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.582189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.582274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.582304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.582391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.582419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.582564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.582592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.582710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.582739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.582862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.582891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.583007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.583036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.583173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.583202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.583328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.583381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.583494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.583531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.583686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.583723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.583860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.583896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.584218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.584261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.584389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.584419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.584581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.584618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.584770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.584806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.584970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.585010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.585103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.585149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.585275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.585305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.585451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.585491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.585618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.585675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.585936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.586002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.586231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.586260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.586358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.586386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.586524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.586552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.586824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.586889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.587087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.587118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.587236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.587266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.588515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.588549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.588714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.922 [2024-10-14 13:46:27.588743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.922 qpair failed and we were unable to recover it.
00:35:35.922 [2024-10-14 13:46:27.588873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.588902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.589955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.589985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.590084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.590115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.590243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.590273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.590407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.590441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.590566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.590611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.590739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.590772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.590857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.590886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.590987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.591017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.591144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.591190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.591286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.591314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.591400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.591439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.591543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.923 [2024-10-14 13:46:27.591572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.923 qpair failed and we were unable to recover it.
00:35:35.923 [2024-10-14 13:46:27.591671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.591708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.591826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.591855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.591942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.591982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.592085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.592113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.592211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.592240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 
00:35:35.923 [2024-10-14 13:46:27.592358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.592387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.592484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.592513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.592672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.592702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.592794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.592830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.592950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.592979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 
00:35:35.923 [2024-10-14 13:46:27.593089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.593117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.593230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.593259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.593419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.593461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.593607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.593635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.593761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.593791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 
00:35:35.923 [2024-10-14 13:46:27.593890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.593920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.594026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.594055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.595088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.595148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.923 [2024-10-14 13:46:27.595251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.923 [2024-10-14 13:46:27.595279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.923 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.595403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.595431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 
00:35:35.924 [2024-10-14 13:46:27.595556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.595583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.595672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.595699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.595826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.595853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.595940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.595966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.596053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.596081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 
00:35:35.924 [2024-10-14 13:46:27.596209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.596238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.596333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.596361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.596467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.596494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.596600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.596627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.596757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.596784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 
00:35:35.924 [2024-10-14 13:46:27.596872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.596899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.596979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.597008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.597094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.597138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.597232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.597260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 00:35:35.924 [2024-10-14 13:46:27.597343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.924 [2024-10-14 13:46:27.597371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.924 qpair failed and we were unable to recover it. 
00:35:35.924 [2024-10-14 13:46:27.598090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.924 [2024-10-14 13:46:27.598149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.924 qpair failed and we were unable to recover it.
00:35:35.924 [2024-10-14 13:46:27.598255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.924 [2024-10-14 13:46:27.598296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.924 qpair failed and we were unable to recover it.
00:35:35.924 [2024-10-14 13:46:27.599201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.924 [2024-10-14 13:46:27.599243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:35.924 qpair failed and we were unable to recover it.
00:35:35.926 [2024-10-14 13:46:27.608137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.608183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.608264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.608292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.608409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.608443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.608581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.608619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.608799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.608835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 
00:35:35.926 [2024-10-14 13:46:27.608976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.609006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.609114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.609175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.609298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.609327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.609452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.609480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.609563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.609591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 
00:35:35.926 [2024-10-14 13:46:27.609732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.609768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.610022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.610070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.610248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.610277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.610367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.610394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.926 [2024-10-14 13:46:27.610519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.610547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 
00:35:35.926 [2024-10-14 13:46:27.610672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.926 [2024-10-14 13:46:27.610722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.926 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.610828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.610856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.610967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.610996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.611106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.611163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.611282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.611312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 
00:35:35.927 [2024-10-14 13:46:27.611410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.611445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.611577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.611608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.611754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.611802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.611893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.611923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.612041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.612070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 
00:35:35.927 [2024-10-14 13:46:27.612204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.612245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.612342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.612371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.612493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.612543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.612678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.612734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.612841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.612885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 
00:35:35.927 [2024-10-14 13:46:27.613018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.613046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.613145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.613173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.613300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.613327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.613447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.613474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.613575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.613611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 
00:35:35.927 [2024-10-14 13:46:27.613725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.613776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.613895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.613922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.614007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.614163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.614281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 
00:35:35.927 [2024-10-14 13:46:27.614398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.614548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.614673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.614797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.614906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.614932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 
00:35:35.927 [2024-10-14 13:46:27.615044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.615076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.615176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.615204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.615296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.615324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.616320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.616354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.616488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.616517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 
00:35:35.927 [2024-10-14 13:46:27.616635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.927 [2024-10-14 13:46:27.616673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.927 qpair failed and we were unable to recover it. 00:35:35.927 [2024-10-14 13:46:27.616789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.616816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.616931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.616958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.617044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.617071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.617188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.617230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 
00:35:35.928 [2024-10-14 13:46:27.617352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.617382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.617492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.617521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.617664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.617692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.617824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.617852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.617972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.618000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 
00:35:35.928 [2024-10-14 13:46:27.618149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.618193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.618275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.618302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.618394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.618421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.618565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.618592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.618732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.618759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 
00:35:35.928 [2024-10-14 13:46:27.618886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.618912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.619008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.619142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.619268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.619384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 
00:35:35.928 [2024-10-14 13:46:27.619502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.619616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.619741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.619875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.619901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.620012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 
00:35:35.928 [2024-10-14 13:46:27.620158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.620298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.620411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.620577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.620697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 
00:35:35.928 [2024-10-14 13:46:27.620805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.620946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.620972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.621068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.621105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.621214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.621253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.928 [2024-10-14 13:46:27.621348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.621377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 
00:35:35.928 [2024-10-14 13:46:27.621508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.928 [2024-10-14 13:46:27.621535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.928 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.621638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.621665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.621749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.621775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.621859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.621885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.621976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.622003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 
00:35:35.929 [2024-10-14 13:46:27.622098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.622140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.622258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.622285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.622369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.622397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.622513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.622539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.622683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.622710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 
00:35:35.929 [2024-10-14 13:46:27.622845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.622873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.623001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.623119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.623271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.623408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 
00:35:35.929 [2024-10-14 13:46:27.623583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.623727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.623848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.623969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.623995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.624088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.624114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 
00:35:35.929 [2024-10-14 13:46:27.624267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.624294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.624384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.624411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.624492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.624518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.624636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.624661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.624780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.624806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 
00:35:35.929 [2024-10-14 13:46:27.624897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.624923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.625008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.625154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.625299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.625412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 
00:35:35.929 [2024-10-14 13:46:27.625559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.625677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.625846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.625964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.625991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 00:35:35.929 [2024-10-14 13:46:27.626103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.929 [2024-10-14 13:46:27.626142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.929 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.626233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.626259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.626356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.626383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.626508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.626535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.626620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.626646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.626732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.626759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.626872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.626898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.627001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.627174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.627289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.627399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.627511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.627651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.627805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.627952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.627978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.628099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.628125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.628259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.628285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.628399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.628425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.628519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.628545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.628622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.628650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.628744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.628771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.628850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.628876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.628996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.629148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.629263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.629375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.629503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.629612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.629761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.629939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.629965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.630062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.630100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.630198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.630226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.630348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.630374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.630517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.630547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.630632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.630657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.630746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.630773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 
00:35:35.930 [2024-10-14 13:46:27.630886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.630912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.631026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.631051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.631196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.631224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.631333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.930 [2024-10-14 13:46:27.631359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.930 qpair failed and we were unable to recover it. 00:35:35.930 [2024-10-14 13:46:27.631455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.631481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 
00:35:35.931 [2024-10-14 13:46:27.631625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.631651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.631733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.631759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.631849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.631875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.632016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.632167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 
00:35:35.931 [2024-10-14 13:46:27.632271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.632421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.632552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.632687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.632823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 
00:35:35.931 [2024-10-14 13:46:27.632965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.632990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.633080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.633106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.633230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.633257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.633366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.633392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.633499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.633524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 
00:35:35.931 [2024-10-14 13:46:27.633630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.633656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.633770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.633795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.633909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.633936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.634029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.634055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 00:35:35.931 [2024-10-14 13:46:27.634176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.931 [2024-10-14 13:46:27.634206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.931 qpair failed and we were unable to recover it. 
00:35:35.931 [2024-10-14 13:46:27.634309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.634347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.634442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.634469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.634548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.634575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.634662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.634689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.634813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.634839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.634932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.634959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.635048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.635074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.635169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.635195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.635306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.635333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.635471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.635496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.635633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.635659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.635798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.635825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.635940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.635966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.636095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.636121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.636212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.636239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.636333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.636358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.636485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.931 [2024-10-14 13:46:27.636512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.931 qpair failed and we were unable to recover it.
00:35:35.931 [2024-10-14 13:46:27.636666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.636690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.636796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.636823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.636967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.636993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.637110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.637150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.637294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.637320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.637433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.637459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.637636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.637681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.637815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.637859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.637988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.638016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.638146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.638173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.638258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.638284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.638371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.638397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.638541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.638569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.638684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.638712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.638844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.638872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.638989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.639016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.639120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.639187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.639289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.639318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.639429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.639457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.639578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.639604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.639728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.639754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.639878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.639906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.640029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.640065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.640214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.640244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.640355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.640383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.640499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.640525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.640610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.640637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.640767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.640795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.640915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.640944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.641072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.641101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.641220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.641248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.641329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.641354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.641457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.641486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.641627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.641654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.641801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.932 [2024-10-14 13:46:27.641830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.932 qpair failed and we were unable to recover it.
00:35:35.932 [2024-10-14 13:46:27.641946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.641971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.642075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.642100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.642231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.642260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.642381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.642407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.642529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.642555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.642665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.642701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.642790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.642816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.642953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.642992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.643147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.643189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.643294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.643322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.643473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.643515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.643666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.643709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.643799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.643825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.643944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.643969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.644080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.644108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.644252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.644294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.644426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.644469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.644604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.644648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.644763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.644788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.644877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.644904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.645045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.645201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.645309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.645419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.645566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.645732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.645873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.645988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.646014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.646160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.646187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.646323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.646351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.646482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.646510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.646682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.646726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.646830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.646861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.646970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.646995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.647152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.647179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.647288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.647314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.647429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.647456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.647602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.647646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.933 [2024-10-14 13:46:27.647809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.933 [2024-10-14 13:46:27.647854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.933 qpair failed and we were unable to recover it.
00:35:35.934 [2024-10-14 13:46:27.647957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.934 [2024-10-14 13:46:27.647986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.934 qpair failed and we were unable to recover it.
00:35:35.934 [2024-10-14 13:46:27.648100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.934 [2024-10-14 13:46:27.648126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.934 qpair failed and we were unable to recover it.
00:35:35.934 [2024-10-14 13:46:27.648228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.934 [2024-10-14 13:46:27.648259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.934 qpair failed and we were unable to recover it.
00:35:35.934 [2024-10-14 13:46:27.648368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.934 [2024-10-14 13:46:27.648397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.934 qpair failed and we were unable to recover it.
00:35:35.934 [2024-10-14 13:46:27.648555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.648598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.648768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.648812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.648927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.648953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.649040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.649066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.649198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.649224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 
00:35:35.934 [2024-10-14 13:46:27.649343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.649369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.649467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.649492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.649608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.649642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.649785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.649810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.649899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.649925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 
00:35:35.934 [2024-10-14 13:46:27.650022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.650123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.650255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.650358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.650498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 
00:35:35.934 [2024-10-14 13:46:27.650664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.650804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.650949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.650974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.651068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.651107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.651210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.651238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 
00:35:35.934 [2024-10-14 13:46:27.651384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.651411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.651558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.651585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.651716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.651741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.651860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.651886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.652013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.652040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 
00:35:35.934 [2024-10-14 13:46:27.652155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.652187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.934 qpair failed and we were unable to recover it. 00:35:35.934 [2024-10-14 13:46:27.652288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-10-14 13:46:27.652316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.652474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.652502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.652591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.652620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.652723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.652751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.652874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.652902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.653021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.653048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.653212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.653238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.653333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.653360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.653490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.653532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.653658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.653700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.653783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.653809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.653953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.653980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.654089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.654222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.654333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.654454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.654552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.654695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.654815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.654966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.654992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.655116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.655164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.655316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.655345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.655491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.655518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.655631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.655657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.655789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.655816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.655933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.655962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.656077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.656107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.656273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.656300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.656457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.656500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.656643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.656672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.656826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.656870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.656995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.657024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.657174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.657201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.657345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.657371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.657485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.657511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.657639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.657666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.657785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.657813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.657918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.657961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.935 [2024-10-14 13:46:27.658118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.658152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 
00:35:35.935 [2024-10-14 13:46:27.658272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.935 [2024-10-14 13:46:27.658298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.935 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.658413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.658439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.658559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.658585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.658700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.658726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.658866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.658892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 
00:35:35.936 [2024-10-14 13:46:27.659023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.659052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.659197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.659224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.659332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.659357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.659474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.659501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.659634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.659662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 
00:35:35.936 [2024-10-14 13:46:27.659747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.659773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.659884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.659913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.660067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.660096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.660258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.660287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.660411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.660438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 
00:35:35.936 [2024-10-14 13:46:27.660552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.660578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.660682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.660710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.660894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.660938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.661066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.661093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 00:35:35.936 [2024-10-14 13:46:27.661228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.936 [2024-10-14 13:46:27.661254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.936 qpair failed and we were unable to recover it. 
00:35:35.936 [2024-10-14 13:46:27.661363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.661389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.661476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.661501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.661629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.661657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.661805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.661832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.661937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.661964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.662084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.662111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.662226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.662253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.662371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.662402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.662527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.662553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.662671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.662698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.662852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.662880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.662999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.663027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.663155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.663194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.663321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.663349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.663442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.663469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.663596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.663624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.663782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.663808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.663900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.663926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.936 [2024-10-14 13:46:27.664042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.936 [2024-10-14 13:46:27.664069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.936 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.664198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.664224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.664315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.664341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.664504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.664532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.664644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.664688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.664808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.664836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.664987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.665016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.665145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.665173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.665307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.665349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.665509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.665552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.665659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.665703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.665846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.665889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.666027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.666146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.666256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.666410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.666593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.666780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.666900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.666992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.667142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.667280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.667431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.667555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.667694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.667813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.667945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.667971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.668094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.668140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.668247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.668275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.668373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.668401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.668526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.668552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.668672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.668699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.668789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.668816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.668929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.668957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.669081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.669109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.669224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.669251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.937 [2024-10-14 13:46:27.669378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.937 [2024-10-14 13:46:27.669405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.937 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.669529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.669557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.669667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.669694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.669844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.669871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.669996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.670024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.670119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.670171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.670288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.670313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.670430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.670457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.670563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.670590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.670720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.670762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.670858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.670886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.671032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.671059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.671232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.671271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.671394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.671422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.671504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.671530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.671653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.671695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.671824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.671867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.671992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.672032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.672122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.672156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.672284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.672310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.672417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.672460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.672610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.672653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.672787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.672817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.672952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.672980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.673115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.673180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.673312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.673341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.673466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.673510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.673640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.673682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.673815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.673843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.673940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.673967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.674082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.674108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.674191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.674217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.674303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.674329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.674436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.674482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.674601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.674629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.674757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.674803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.674906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.674934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.675086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.675114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.938 qpair failed and we were unable to recover it.
00:35:35.938 [2024-10-14 13:46:27.675253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.938 [2024-10-14 13:46:27.675279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.675438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.675482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.675644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.675688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.675802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.675846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.675979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.676006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.676096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.676123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.676218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.676243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.676373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.676402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.676550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.676595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.676724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.676758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.676888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.676915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.677030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.677057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.677213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.677252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.677403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.677437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.677556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.677583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.677673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.677700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.677832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.677877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.677969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.677995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.678120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.678151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.678275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.678303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.678455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.678484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.678634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.678678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.678786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.939 [2024-10-14 13:46:27.678811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.939 qpair failed and we were unable to recover it.
00:35:35.939 [2024-10-14 13:46:27.678901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.678928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.679046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.679071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.679201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.679227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.679321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.679347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.679471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.679497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 
00:35:35.939 [2024-10-14 13:46:27.679610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.679637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.679753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.679779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.679918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.679943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.680023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.680050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.680157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.680184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 
00:35:35.939 [2024-10-14 13:46:27.680332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.680358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.680492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.680537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.939 [2024-10-14 13:46:27.680669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.939 [2024-10-14 13:46:27.680695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.939 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.680839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.680865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.681012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.681039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.681158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.681184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.681314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.681357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.681495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.681539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.681686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.681712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.681818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.681843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.681950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.681976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.682067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.682093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.682189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.682215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.682318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.682344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.682460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.682485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.682597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.682622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.682707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.682732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.682877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.682907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.683019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.683118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.683277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.683421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.683533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.683666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.683809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.683945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.683970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.684081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.684107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.684219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.684244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.684339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.684377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.684503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.684531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.684647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.684673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.684824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.684851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.684948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.684975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.685119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.685151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.685260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.685287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.685406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.685432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.685550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.685577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.685726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.685772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.685915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.685941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.686029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.686067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 
00:35:35.940 [2024-10-14 13:46:27.686199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.686228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.686346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.686373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.686544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.940 [2024-10-14 13:46:27.686574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.940 qpair failed and we were unable to recover it. 00:35:35.940 [2024-10-14 13:46:27.686716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.686744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.686881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.686910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 
00:35:35.941 [2024-10-14 13:46:27.687022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.687061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.687196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.687224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.687356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.687384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.687540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.687582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.687740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.687782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 
00:35:35.941 [2024-10-14 13:46:27.687930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.687956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.688061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.688087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.688223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.688265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.688349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.688375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.688468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.688494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 
00:35:35.941 [2024-10-14 13:46:27.688618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.688644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.688794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.688820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.688893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.688918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.689038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.689064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.689160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.689186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 
00:35:35.941 [2024-10-14 13:46:27.689295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.689321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.689453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.689481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.689589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.689616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.689755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.689781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.689870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.689896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 
00:35:35.941 [2024-10-14 13:46:27.689985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.690014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.690142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.690200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.690301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.690330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.690424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.690452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 00:35:35.941 [2024-10-14 13:46:27.690585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.941 [2024-10-14 13:46:27.690631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.941 qpair failed and we were unable to recover it. 
00:35:35.944 [2024-10-14 13:46:27.707187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.944 [2024-10-14 13:46:27.707214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.944 qpair failed and we were unable to recover it. 00:35:35.944 [2024-10-14 13:46:27.707331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.944 [2024-10-14 13:46:27.707356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.944 qpair failed and we were unable to recover it. 00:35:35.944 [2024-10-14 13:46:27.707443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.944 [2024-10-14 13:46:27.707469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.707578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.707604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.707709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.707735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.707824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.707850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.707999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.708038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.708139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.708168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.708247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.708274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.708368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.708396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.708517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.708544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.708665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.708692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.708809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.708839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.709005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.709036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.709192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.709219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.709324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.709353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.709518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.709549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.709680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.709711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.709847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.709876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.709968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.709999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.710165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.710209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.710296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.710323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.710473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.710515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.710608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.710638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.710807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.710854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.711054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.711082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.711195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.711228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.711348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.711374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.711519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.711546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.711694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.711722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.711848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.711878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.712041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.712068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.712223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.712249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.712361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.712388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.712554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.712599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.712693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.712719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.712907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.712953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.713048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.713077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.713196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.713223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.713382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.713412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 
00:35:35.945 [2024-10-14 13:46:27.713576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.713621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.713787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.945 [2024-10-14 13:46:27.713836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.945 qpair failed and we were unable to recover it. 00:35:35.945 [2024-10-14 13:46:27.713984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.714101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.714247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 
00:35:35.946 [2024-10-14 13:46:27.714354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.714492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.714621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.714803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.714961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.714986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 
00:35:35.946 [2024-10-14 13:46:27.715145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.715172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.715301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.715345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.715452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.715481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.715641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.715671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.715780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.715811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 
00:35:35.946 [2024-10-14 13:46:27.715929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.715957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.716083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.716111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.716245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.716275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.716397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.716438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.716540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.716570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 
00:35:35.946 [2024-10-14 13:46:27.716681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.716724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.716844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.716871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.717030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.717056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.717170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.717197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.717339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.717366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 
00:35:35.946 [2024-10-14 13:46:27.717502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.717528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.717619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.717645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.717792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.717820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.717915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.717941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.718049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.718075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 
00:35:35.946 [2024-10-14 13:46:27.718204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.718231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.718324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.718350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.718472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.718498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.718639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.718665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 00:35:35.946 [2024-10-14 13:46:27.718800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.946 [2024-10-14 13:46:27.718844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:35.946 qpair failed and we were unable to recover it. 
00:35:35.946 [2024-10-14 13:46:27.718956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.718981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.946 [2024-10-14 13:46:27.719069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.719095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.946 [2024-10-14 13:46:27.719217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.719242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.946 [2024-10-14 13:46:27.719378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.719404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.946 [2024-10-14 13:46:27.719514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.719539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.946 [2024-10-14 13:46:27.719684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.719714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.946 [2024-10-14 13:46:27.719849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.719875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.946 [2024-10-14 13:46:27.719958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.946 [2024-10-14 13:46:27.719983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.946 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.720104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.720168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.720283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.720314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.720451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.720486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.720663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.720692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.720813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.720838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.720962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.721110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.721249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.721395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.721547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.721687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.721841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.721974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.721999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.722086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.722112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.722237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.722266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.722416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.722471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.722579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.722611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.722726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.722752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.722870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.722896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.722973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.722999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.723094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.723122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.723246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.723273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.723393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.723420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.723534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.723560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.723703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.723735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.723836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.723864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.723986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.724014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.724107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.724158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:35.947 [2024-10-14 13:46:27.724346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.947 [2024-10-14 13:46:27.724388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:35.947 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.724511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.724560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.724683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.724712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.724828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.724856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.724955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.724982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.725138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.725182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.725303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.725330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.725408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.725437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.725573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.725602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.725716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.725744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.725867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.725895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.726041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.726070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.726215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.726245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.726365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.726395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.726497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.726525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.726616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.726644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.726741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.726769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.726892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.726921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.727039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.727214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.727336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.727487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.727633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.727773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.727910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.727989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.728152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.728291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.728448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.728595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.728713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.728828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.728966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.728991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.729070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.729096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.729244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.729288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.729419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.729453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.729631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.729669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.729885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.729916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.730076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.730121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.730268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.730294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.730409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.730435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.730579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.730606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.730699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.730727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.730836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.730861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.730990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.731016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.731161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.731187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.731320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.731346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.731488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.731514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.731601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.731626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.731747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.731776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.731934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.731961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.732073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.732100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.732232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.732260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.732386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.732415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.232 [2024-10-14 13:46:27.732533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.232 [2024-10-14 13:46:27.732562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.232 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.732667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.732694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.732842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.732870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.732992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.733017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.733101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.733133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.733273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.733298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.733433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.733477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.733608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.733649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.733765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.733791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.733910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.733940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.734912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.734937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.735070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.735185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.735338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.735490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.735599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.735739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.233 [2024-10-14 13:46:27.735874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.233 qpair failed and we were unable to recover it.
00:35:36.233 [2024-10-14 13:46:27.735964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.735988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.736071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.736097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.736245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.736270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.736350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.736375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.736503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.736528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.736707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.736732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.736845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.736873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.737016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.737042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.737145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.737188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.737331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.737374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.737519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.737562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.737723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.737777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.737919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.737946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.738098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.738124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.738247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.738272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.738368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.738396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.738498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.738523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.738686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.738731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.738823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.738847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.738963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.738989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.739101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.739125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.739248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.739273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.739358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.739383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.739500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.739526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.739671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.739697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.739787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.739813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.739925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.739951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.740068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.740092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.740196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.740223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.740335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.740360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.740469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.740495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.740607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.740631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.740744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.740770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.740884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.740909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.741022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.741160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.741275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.741415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.741552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.741704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.741822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.741929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.741955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.742067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.742091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.742236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.742262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.742376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.742401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.742518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.742543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.742623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.742649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.742732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.742758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.742898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.742923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.743038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.743062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 
00:35:36.233 [2024-10-14 13:46:27.743147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.743173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.743312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.743337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.233 qpair failed and we were unable to recover it. 00:35:36.233 [2024-10-14 13:46:27.743452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.233 [2024-10-14 13:46:27.743481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.743603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.743627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.743755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.743780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 
00:35:36.234 [2024-10-14 13:46:27.743894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.743920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.744029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.744053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.744209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.744235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.744351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.744376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.744520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.744545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 
00:35:36.234 [2024-10-14 13:46:27.744625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.744649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.744744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.744770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.744881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.744906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.745017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.745041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.745172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.745198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 
00:35:36.234 [2024-10-14 13:46:27.745309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.745334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.745479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.745505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.745646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.745671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.745800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.745825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.745939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.745964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 
00:35:36.234 [2024-10-14 13:46:27.746107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.746137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.746227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.746253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.746362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.746387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.746497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.746523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 00:35:36.234 [2024-10-14 13:46:27.746604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.746629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 
00:35:36.234 [2024-10-14 13:46:27.746774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.746800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 
00:35:36.234 [2024-10-14 13:46:27.749243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.234 [2024-10-14 13:46:27.749286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.234 qpair failed and we were unable to recover it. 
00:35:36.235 [2024-10-14 13:46:27.763360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.763399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 
00:35:36.235 [2024-10-14 13:46:27.763648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.763675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.763817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.763844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.763961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.763987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.764082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.764108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.764220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.764251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 
00:35:36.235 [2024-10-14 13:46:27.764395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.764439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.764521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.764547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.764676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.764707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.764848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.764874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 00:35:36.235 [2024-10-14 13:46:27.764992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.235 [2024-10-14 13:46:27.765017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.235 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.765100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.765126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.765225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.765251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.765342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.765368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.765518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.765544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.765662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.765688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.765808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.765834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.765916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.765941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.766051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.766179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.766288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.766392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.766538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.766681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.766779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.766931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.766957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.767053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.767092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.767218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.767247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.767361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.767388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.767512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.767537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.767618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.767643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.767756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.767781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.768170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.768201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.768323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.768353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.768475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.768505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.768635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.768665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.768847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.768891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.769030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.769066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.769177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.769204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.769301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.769330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.769495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.769525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.769631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.769657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.769792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.769835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.769956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.769982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.770068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.770093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.770252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.770296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.770457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.770484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.770592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.770617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.770757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.770782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.770901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.770925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.771033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.771059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.771185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.771214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.771372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.771415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.771553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.771602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.771743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.771769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.771887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.771913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.772005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.772030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.772119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.772149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.772237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.772262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.772401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.772426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.772542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.772567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.772682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.772706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.772846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.772872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.772996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.773035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.773159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.773188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.773279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.773308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.773395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.773422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.773544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.773572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.773691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.773718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.773889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.773919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.774047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.774078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.774269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.774296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.774422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.774452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.774576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.774607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.774766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.774797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 00:35:36.236 [2024-10-14 13:46:27.775019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.236 [2024-10-14 13:46:27.775063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.236 qpair failed and we were unable to recover it. 
00:35:36.236 [2024-10-14 13:46:27.775154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.236 [2024-10-14 13:46:27.775180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.236 qpair failed and we were unable to recover it.
00:35:36.236 [2024-10-14 13:46:27.775316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.236 [2024-10-14 13:46:27.775359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.236 qpair failed and we were unable to recover it.
00:35:36.236 [2024-10-14 13:46:27.775492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.236 [2024-10-14 13:46:27.775537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.236 qpair failed and we were unable to recover it.
00:35:36.236 [2024-10-14 13:46:27.775699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.236 [2024-10-14 13:46:27.775742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.236 qpair failed and we were unable to recover it.
00:35:36.236 [2024-10-14 13:46:27.775914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.236 [2024-10-14 13:46:27.775957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.236 qpair failed and we were unable to recover it.
00:35:36.236 [2024-10-14 13:46:27.776071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.776098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.776221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.776247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.776340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.776368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.776508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.776539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.776665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.776696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.776796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.776826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.776929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.776956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.777040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.777065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.777187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.777213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.777321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.777349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.777548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.777592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.777728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.777758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.777888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.777913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.778008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.778033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.778182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.778209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.778322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.778347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.778437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.778462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.778607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.778633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.778722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.778748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.778885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.778910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.779031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.779057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.779180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.779219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.779365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.779393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.779492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.779519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.779632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.779659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.779803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.779829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.779946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.779973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.780122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.780153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.780271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.780297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.780425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.780455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.780637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.780681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.780812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.780856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.780971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.780997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.781116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.781150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.781313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.781361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.781452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.781477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.781572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.781597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.781680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.781705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.781789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.781815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.781929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.781954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.782060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.782086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.782241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.782267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.782411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.782442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.782544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.782570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.782656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.782681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.782774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.782799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.782952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.782977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.783092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.783117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.783235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.783261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.783399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.783424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.783560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.783586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.783668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.783693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.783810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.783836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.783923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.783952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.784088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.784113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.784239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.784265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.784356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.784381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.784462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.784486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.784658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.784684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.784825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.784850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.784941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.784967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.785045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.785070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.785200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.785226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.785335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.785360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.785466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.785491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.785629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.785654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.785818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.785843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.785933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.785959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.786973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.786998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.787142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.787168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.237 qpair failed and we were unable to recover it.
00:35:36.237 [2024-10-14 13:46:27.787276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.237 [2024-10-14 13:46:27.787300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.787412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.787445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.787522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.787546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.787667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.787693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.787836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.787861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.787979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.788004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.788158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.788189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.788345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.788391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.788552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.788591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.788696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.788725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.788847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.788874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.788987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.789013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.789107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.789147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.789292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.789322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.789530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.789559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.789688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.789718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.789844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.789874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.790014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.790041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.790138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.790164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.790297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.790341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.790480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.790526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.790686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.790717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.790834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.790864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.790990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.791016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.791151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.791177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.791265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.791291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.791399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.791424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.791572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.791598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.791705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.791731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.791853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.791878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.791999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.792025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.792166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.792206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.792300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.238 [2024-10-14 13:46:27.792328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.238 qpair failed and we were unable to recover it.
00:35:36.238 [2024-10-14 13:46:27.792446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.792473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.792563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.792591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.792730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.792757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.792875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.792901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.792984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.793012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.793133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.793161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.793300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.793327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.793490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.793521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.793675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.793706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.793802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.793833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.793953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.793983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.794148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.794191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.794337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.794363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.794476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.794506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.794659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.794690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.794817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.794848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.795004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.795036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.795176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.795203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.795324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.795350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.795481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.795526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.795625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.795669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.795780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.795806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.795894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.795920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.796055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.796095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.796201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.796238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.796386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.796413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.796549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.796576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.796715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.796742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.796883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.796911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.797033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.797060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.797158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.797184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.797309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.797354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.797520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.797569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.797733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.797777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.797889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.797915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.798003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.798111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.798230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.798347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.798522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 
00:35:36.238 [2024-10-14 13:46:27.798679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.798843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.798962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.798989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.238 [2024-10-14 13:46:27.799104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.238 [2024-10-14 13:46:27.799137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.238 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.799251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.799277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 
00:35:36.239 [2024-10-14 13:46:27.799433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.799459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.799570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.799596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.799725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.799755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.799852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.799883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.799978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.800007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 
00:35:36.239 [2024-10-14 13:46:27.800167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.800194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.800307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.800342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.800460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.800487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.800577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.800607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.800739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.800770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 
00:35:36.239 [2024-10-14 13:46:27.800897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.800927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.801076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.801105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.801209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.801235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.801339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.801368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.801479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.801509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 
00:35:36.239 [2024-10-14 13:46:27.801663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.801710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.801839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.801884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.801994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.802021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.802103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.802136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.802255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.802282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 
00:35:36.239 [2024-10-14 13:46:27.802428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.802458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.802575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.802606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.802703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.802733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.802832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.802863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.802988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.803018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 
00:35:36.239 [2024-10-14 13:46:27.803148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.803193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.803277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.803303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.803389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.803415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.803504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.803530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 00:35:36.239 [2024-10-14 13:46:27.803658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.239 [2024-10-14 13:46:27.803689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.239 qpair failed and we were unable to recover it. 
00:35:36.239 [2024-10-14 13:46:27.803848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.239 [2024-10-14 13:46:27.803878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.239 qpair failed and we were unable to recover it.
00:35:36.239 [2024-10-14 13:46:27.806750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.239 [2024-10-14 13:46:27.806799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.239 qpair failed and we were unable to recover it.
00:35:36.240 [2024-10-14 13:46:27.810471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.240 [2024-10-14 13:46:27.810514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.240 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." pair repeats continuously through 13:46:27.823592, alternating across tqpair handles 0x7f9994000b90, 0x5c9340, and 0x7f999c000b90, all targeting addr=10.0.0.2, port=4420]
00:35:36.240 [2024-10-14 13:46:27.823675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.240 [2024-10-14 13:46:27.823702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.240 qpair failed and we were unable to recover it. 00:35:36.240 [2024-10-14 13:46:27.823808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.240 [2024-10-14 13:46:27.823840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.240 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.823950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.823982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.824110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.824173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.824283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.824310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.824392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.824427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.824537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.824563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.824659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.824698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.824808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.824855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.825009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.825043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.825190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.825218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.825300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.825327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.825416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.825442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.825526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.825553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.825639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.825683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.825792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.825819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.825981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.826012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.826141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.826187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.826271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.826297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.826389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.826427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.826613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.826677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.826857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.826922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.827154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.827214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.827300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.827328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.827409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.827436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.827547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.827573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.827757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.827821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.828017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.828044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.828136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.828166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.828254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.828281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.828363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.828390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.828477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.828516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.828662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.828689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.828792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.828824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.828972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.829005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.829144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.829190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.829316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.829344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.829449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.829475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.829567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.829593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.829745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.829803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.829944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.829977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.830111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.830175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.830287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.830313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.830394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.830421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.830505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.830532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.830640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.830674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.830774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.830806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.830918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.830950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.831055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.831088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.831227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.831258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.831378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.831405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.831603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.831635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.831768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.831826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.831965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.831991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.832093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.832145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.832241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.832270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.832355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.832382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.832540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.832608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.832842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.832908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.833180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.833208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.833301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.833329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.833412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.833443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.833620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.833699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.833875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.833943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.834103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.834153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.834252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.834279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.834396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.834423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.834540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.834588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.834880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.834946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.835124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.835156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 
00:35:36.241 [2024-10-14 13:46:27.835275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.835302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.835389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.835436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.835655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.835720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.241 [2024-10-14 13:46:27.835977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.241 [2024-10-14 13:46:27.836041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.241 qpair failed and we were unable to recover it. 00:35:36.242 [2024-10-14 13:46:27.836237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.242 [2024-10-14 13:46:27.836264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.242 qpair failed and we were unable to recover it. 
00:35:36.242 [2024-10-14 13:46:27.836375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.242 [2024-10-14 13:46:27.836401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.242 qpair failed and we were unable to recover it. 00:35:36.242 [2024-10-14 13:46:27.836531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.242 [2024-10-14 13:46:27.836557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.242 qpair failed and we were unable to recover it. 00:35:36.242 [2024-10-14 13:46:27.836702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.242 [2024-10-14 13:46:27.836734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.242 qpair failed and we were unable to recover it. 00:35:36.242 [2024-10-14 13:46:27.836897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.242 [2024-10-14 13:46:27.836956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.242 qpair failed and we were unable to recover it. 00:35:36.242 [2024-10-14 13:46:27.837157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.242 [2024-10-14 13:46:27.837203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.242 qpair failed and we were unable to recover it. 
00:35:36.243 [2024-10-14 13:46:27.862048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.862113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.862391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.862467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.862762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.862827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.863071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.863162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.863464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.863529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 
00:35:36.243 [2024-10-14 13:46:27.863748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.863813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.864083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.864166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.864469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.864534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.864824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.864890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.865154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.865219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 
00:35:36.243 [2024-10-14 13:46:27.865484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.865549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.865849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.865924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.866216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.866285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.866506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.866571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.866807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.866871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 
00:35:36.243 [2024-10-14 13:46:27.867189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.867255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.867464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.867532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.867768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.867833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.868100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.868187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.868437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.868502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 
00:35:36.243 [2024-10-14 13:46:27.868748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.868812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.869019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.869083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.869360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.869438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.869692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.869757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.243 [2024-10-14 13:46:27.870024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.870089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 
00:35:36.243 [2024-10-14 13:46:27.870407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.243 [2024-10-14 13:46:27.870472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.243 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.870717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.870784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.871047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.871111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.871375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.871448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.871742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.871816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.872027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.872093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.872357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.872434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.872699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.872773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.873025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.873090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.873378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.873447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.873650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.873714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.873999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.874064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.874364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.874440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.874704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.874768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.874999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.875065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.875273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.875340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.875583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.875650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.875862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.875928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.876218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.876285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.876487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.876554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.876812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.876876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.877102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.877184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.877392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.877459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.877719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.877783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.878033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.878099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.878384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.878449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.878765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.878837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.879119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.879191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.879419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.879497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.879730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.879789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.880059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.880115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.880339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.880395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.880573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.880629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.880856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.880913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.881176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.881234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.881429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.881485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.881662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.881718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.881888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.881944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.882123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.882214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.882456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.882517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.882775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.882857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.892039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.892095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.892364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.892439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.892682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.892758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.893003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.893058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.893320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.893396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.893620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.893677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 
00:35:36.244 [2024-10-14 13:46:27.893845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.893901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.244 qpair failed and we were unable to recover it. 00:35:36.244 [2024-10-14 13:46:27.894178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.244 [2024-10-14 13:46:27.894235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.894450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.894526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.894799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.894855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.895073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.895147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.895437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.895524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.895793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.895870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.896094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.896168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.896441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.896518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.896806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.896879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.897146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.897204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.897423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.897499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.897742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.897827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.898070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.898154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.898404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.898480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.898700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.898784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.898999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.899057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.899383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.899471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.899759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.899833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.900010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.900068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.900308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.900384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.900628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.900702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.900921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.900977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.901170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.901229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.901513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.901596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.901886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.901962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.902171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.902229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.902479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.902553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.902849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.902924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.903220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.903296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.903577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.903661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.903873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.903929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.904114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.904186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.904412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.904468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.904685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.904743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.904975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.905034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.905282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.905359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.905607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.905683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.905897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.905953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.906180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.906239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.906509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.906596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.906850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.906907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.907107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.907201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.907484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.907561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.907843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.907901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.908159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.908216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.908434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.908520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.908820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.908901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.909083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.909151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.909399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.909456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.909700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.909774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.909968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.910024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.910310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.910387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.910597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.910674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.910892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.910948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.911148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.911206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.911394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.911452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.911671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.911729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.911986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.912043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.912262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.912319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.912580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.912636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.912827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.912883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.913148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.913205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.913434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.913511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.913703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.913782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.913997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.914054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.914342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.914418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.914660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.914742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.914964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.915021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.915226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.915283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.915502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.915560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.915789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.915845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.916018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.916077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.916276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.916334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.916590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.916647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 
00:35:36.245 [2024-10-14 13:46:27.916812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.916869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.917031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.917094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.917304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.245 [2024-10-14 13:46:27.917361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.245 qpair failed and we were unable to recover it. 00:35:36.245 [2024-10-14 13:46:27.917553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.917608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.917824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.917890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.918176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.918234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.918447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.918504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.918728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.918784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.918984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.919040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.919282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.919340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.919540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.919618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.919873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.919948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.920188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.920269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.920560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.920646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.920877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.920933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.921152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.921209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.921466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.921541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.921771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.921852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.922102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.922172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.922390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.922471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.922684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.922763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.923010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.923067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.923321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.923397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.923608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.923685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.923870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.923925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.924191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.924249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.924455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.924532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.924720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.924776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.924952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.925010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.925209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.925291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.925525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.925600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.925832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.925890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.926116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.926198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.926420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.926495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.926765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.926841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.927099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.927170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.927421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.927496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.927792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.927867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.928084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.928158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.928417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.928494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.928800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.928874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.929109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.929181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.929403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.929478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.929751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.929824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.930056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.930121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.930368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.930444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.930700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.930775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.931012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.931067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.931385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.931461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.931749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.931825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.932079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.932150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.932385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.932460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.932705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.932780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.933015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.933070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.933288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.933368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.933630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.933719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.933941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.933997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.934270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.934348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.934538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.934596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.934853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.934928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.935188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.935263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.935572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.935653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.935854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.935911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.936117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.936185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.936397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.936472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.936697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.936754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.936976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.937032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.937257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.937333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.937524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.937582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.937755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.937813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 
00:35:36.246 [2024-10-14 13:46:27.938072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.938164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.938398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.938454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.938671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.938728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.938920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.246 [2024-10-14 13:46:27.938976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.246 qpair failed and we were unable to recover it. 00:35:36.246 [2024-10-14 13:46:27.939171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.939229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 
00:35:36.247 [2024-10-14 13:46:27.939445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.939502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.939745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.939821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.940030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.940086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.940320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.940395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.940638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.940714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 
00:35:36.247 [2024-10-14 13:46:27.940972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.941028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.941306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.941392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.941694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.941770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.941989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.942046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.942340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.942427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 
00:35:36.247 [2024-10-14 13:46:27.942598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.942656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.942882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.942940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.943146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.943203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.943457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.943533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.943773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.943849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 
00:35:36.247 [2024-10-14 13:46:27.944078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.944152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.944394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.944469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.944710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.944785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.945037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.945092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.945364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.945439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 
00:35:36.247 [2024-10-14 13:46:27.945724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.945799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.946063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.946120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.946441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.946516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.946799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.946873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 00:35:36.247 [2024-10-14 13:46:27.947096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.247 [2024-10-14 13:46:27.947171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.247 qpair failed and we were unable to recover it. 
00:35:36.247 [2024-10-14 13:46:27.947378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.947457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.947700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.947776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.947961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.948016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.948267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.948344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.948566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.948641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.948892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.948948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.949233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.949319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.949612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.949687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.949877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.949934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.950143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.950206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.950419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.950476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.950694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.950750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.951010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.951066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.951291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.951348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.951533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.951590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.951813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.951869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.952086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.952157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.952357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.952413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.952631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.952687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.952903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.952959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.953186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.953243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.953459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.953516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.953750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.953824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.954048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.954106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.954349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.954438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.954684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.954757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.954972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.955031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.955318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.955393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.955576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.955635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.955852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.955910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.956151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.956208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.956460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.956535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.956801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.956876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.957054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.957110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.957334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.957410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.957699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.957783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.957959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.958016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.958227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.958308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.958570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.958644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.958898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.958955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.959212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.959289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.959485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.959562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.959821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.959878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.960101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.960169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.960383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.960439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.960659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.960714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.960905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.960960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.961190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.961247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.961466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.961524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.961812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.247 [2024-10-14 13:46:27.961896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.247 qpair failed and we were unable to recover it.
00:35:36.247 [2024-10-14 13:46:27.962143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.962201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.962381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.962438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.962690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.962765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.962940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.962996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.963250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.963326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.963523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.963599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.963851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.963929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.964216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.964294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.964592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.964668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.964930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.964986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.965202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.965281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.965543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.965599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.965798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.965854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.966064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.966120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.966367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.966453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.966721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.966796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.967018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.967074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.967297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.967372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.967594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.967650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.967839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.967896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.968172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.968229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.968518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.968601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.968819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.968895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.969080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.969147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.969397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.969472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.969732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.969790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.970073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.970141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.970400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.970476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.970677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.970736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.970991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.971047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.971332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.971389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.971645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.971727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.971991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.972046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.972262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.972337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.972637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.972712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.972987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.973044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.973286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.973362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.973605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.973686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.973914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.973972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.974213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.974293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.974531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.974607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.974823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.974896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.975155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.975213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.975421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.975497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.975737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.248 [2024-10-14 13:46:27.975812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.248 qpair failed and we were unable to recover it.
00:35:36.248 [2024-10-14 13:46:27.976031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.976089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.976360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.976437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.976617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.976694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.976861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.976919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.977202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.977279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 
00:35:36.248 [2024-10-14 13:46:27.977527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.977603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.977793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.977851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.978054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.978110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.978409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.978485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.978770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.978866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 
00:35:36.248 [2024-10-14 13:46:27.979095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.979169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.979390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.979471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.979677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.979752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.980012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.980067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.980332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.980390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 
00:35:36.248 [2024-10-14 13:46:27.980581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.980656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.980872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.980947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.981163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.981220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.981427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.981503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.981746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.981824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 
00:35:36.248 [2024-10-14 13:46:27.982035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.982091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.982310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.982387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.982673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.982759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.983009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.983065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.983293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.983370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 
00:35:36.248 [2024-10-14 13:46:27.983582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.983660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.983916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.248 [2024-10-14 13:46:27.983972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.248 qpair failed and we were unable to recover it. 00:35:36.248 [2024-10-14 13:46:27.984178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.984235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.984470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.984528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.984836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.984910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 
00:35:36.249 [2024-10-14 13:46:27.985091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.985163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.985372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.985450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.985662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.985738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.985993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.986049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.986306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.986382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 
00:35:36.249 [2024-10-14 13:46:27.986687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.986762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.987024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.987081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.987383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.987470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.987770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.987846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.988028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.988084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 
00:35:36.249 [2024-10-14 13:46:27.988403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.988502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.988814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.988884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.989205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.989265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.989529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.989595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 00:35:36.249 [2024-10-14 13:46:27.989895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.249 [2024-10-14 13:46:27.989978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.249 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.009501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.009565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.009809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.009874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.010126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.010207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.010422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.010486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.010724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.010789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.010994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.011059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.011296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.011360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.011611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.011676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.011884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.011950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.012180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.012247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.012505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.012572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.012812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.012877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.013148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.013216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.013470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.013535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.013782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.013849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.014143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.014209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.014456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.014521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.014821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.014884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.015141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.015209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.015467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.015532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.015739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.015803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.016065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.016148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.016380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.016446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.016696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.016760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.016985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.017050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.017360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.017438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.017683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.017757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.018018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.018083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.018394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.018460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.018740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.018804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.019043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.019109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.019358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.019424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.019717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.019782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.020028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.020093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.020415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.020480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.020689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.020756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.020982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.021047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.021284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.021350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.021642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.021707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.021952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.022019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.022348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.022414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.022703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.022768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.023052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.023115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.023367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.023431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.023722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.023789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.024087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.024169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.024439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.024505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.024806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.024882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.025087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.025170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.025466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.025531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.025814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.025879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.026118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.026200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.026453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.026518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.026792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.026866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.027117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.027203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.027407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.027475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.027782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.027854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.028099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.028202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.028458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.028523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.028817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.028881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.029121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.029206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.029465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.029530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.029768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.029834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.030085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.030178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.030468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.030532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.030731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.030799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.031054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.031120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.031430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.031494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.031706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.031771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.032072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.032154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 
00:35:36.250 [2024-10-14 13:46:28.032402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.032468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.032756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.032821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.033107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.250 [2024-10-14 13:46:28.033192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.250 qpair failed and we were unable to recover it. 00:35:36.250 [2024-10-14 13:46:28.033414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.251 [2024-10-14 13:46:28.033479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.251 qpair failed and we were unable to recover it. 00:35:36.251 [2024-10-14 13:46:28.033723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.251 [2024-10-14 13:46:28.033788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.251 qpair failed and we were unable to recover it. 
00:35:36.251 [2024-10-14 13:46:28.034043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.251 [2024-10-14 13:46:28.034111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.251 qpair failed and we were unable to recover it. 
[the same pair of messages — connect() failed, errno = 111 from posix.c:1055, followed by the sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2399, and "qpair failed and we were unable to recover it." — repeats continuously from 13:46:28.034 through 13:46:28.071]
00:35:36.252 [2024-10-14 13:46:28.072164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.252 [2024-10-14 13:46:28.072231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.252 qpair failed and we were unable to recover it. 00:35:36.252 [2024-10-14 13:46:28.072435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.252 [2024-10-14 13:46:28.072517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.252 qpair failed and we were unable to recover it. 00:35:36.252 [2024-10-14 13:46:28.072826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.252 [2024-10-14 13:46:28.072924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.252 qpair failed and we were unable to recover it. 00:35:36.252 [2024-10-14 13:46:28.073274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.252 [2024-10-14 13:46:28.073346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.252 qpair failed and we were unable to recover it. 00:35:36.529 [2024-10-14 13:46:28.073621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.529 [2024-10-14 13:46:28.073688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.529 qpair failed and we were unable to recover it. 
00:35:36.529 [2024-10-14 13:46:28.073984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.530 [2024-10-14 13:46:28.074049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.530 qpair failed and we were unable to recover it. 00:35:36.530 [2024-10-14 13:46:28.074326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.530 [2024-10-14 13:46:28.074394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.530 qpair failed and we were unable to recover it. 00:35:36.530 [2024-10-14 13:46:28.074665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.530 [2024-10-14 13:46:28.074731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.530 qpair failed and we were unable to recover it. 00:35:36.530 [2024-10-14 13:46:28.074942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.530 [2024-10-14 13:46:28.075007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.530 qpair failed and we were unable to recover it. 00:35:36.530 [2024-10-14 13:46:28.075260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.530 [2024-10-14 13:46:28.075327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.530 qpair failed and we were unable to recover it. 
00:35:36.530 [2024-10-14 13:46:28.075614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.530 [2024-10-14 13:46:28.075681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.530 qpair failed and we were unable to recover it. 00:35:36.530 [2024-10-14 13:46:28.075975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.530 [2024-10-14 13:46:28.076040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.530 qpair failed and we were unable to recover it. 00:35:36.531 [2024-10-14 13:46:28.076331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.076398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 00:35:36.531 [2024-10-14 13:46:28.076670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.076739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 00:35:36.531 [2024-10-14 13:46:28.076998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.077062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 
00:35:36.531 [2024-10-14 13:46:28.077301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.077369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 00:35:36.531 [2024-10-14 13:46:28.077632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.077699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 00:35:36.531 [2024-10-14 13:46:28.077960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.078026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 00:35:36.531 [2024-10-14 13:46:28.078292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.078359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 00:35:36.531 [2024-10-14 13:46:28.078606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.531 [2024-10-14 13:46:28.078670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.531 qpair failed and we were unable to recover it. 
00:35:36.531 [2024-10-14 13:46:28.078912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-14 13:46:28.078977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.532 qpair failed and we were unable to recover it. 00:35:36.532 [2024-10-14 13:46:28.079176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-14 13:46:28.079243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.532 qpair failed and we were unable to recover it. 00:35:36.532 [2024-10-14 13:46:28.079454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-14 13:46:28.079519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.532 qpair failed and we were unable to recover it. 00:35:36.532 [2024-10-14 13:46:28.079766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-14 13:46:28.079831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.532 qpair failed and we were unable to recover it. 00:35:36.532 [2024-10-14 13:46:28.080075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.532 [2024-10-14 13:46:28.080153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.532 qpair failed and we were unable to recover it. 
00:35:36.532 [2024-10-14 13:46:28.080433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.080497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 00:35:36.533 [2024-10-14 13:46:28.080738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.080803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 00:35:36.533 [2024-10-14 13:46:28.081108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.081191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 00:35:36.533 [2024-10-14 13:46:28.081456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.081520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 00:35:36.533 [2024-10-14 13:46:28.081770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.081837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 
00:35:36.533 [2024-10-14 13:46:28.082104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.082186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 00:35:36.533 [2024-10-14 13:46:28.082444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.082508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 00:35:36.533 [2024-10-14 13:46:28.082769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.533 [2024-10-14 13:46:28.082834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.533 qpair failed and we were unable to recover it. 00:35:36.534 [2024-10-14 13:46:28.083094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.534 [2024-10-14 13:46:28.083178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.534 qpair failed and we were unable to recover it. 00:35:36.534 [2024-10-14 13:46:28.083432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.534 [2024-10-14 13:46:28.083497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.534 qpair failed and we were unable to recover it. 
00:35:36.534 [2024-10-14 13:46:28.083739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.534 [2024-10-14 13:46:28.083804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.534 qpair failed and we were unable to recover it. 00:35:36.534 [2024-10-14 13:46:28.084098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.534 [2024-10-14 13:46:28.084199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.534 qpair failed and we were unable to recover it. 00:35:36.534 [2024-10-14 13:46:28.084455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.534 [2024-10-14 13:46:28.084519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.534 qpair failed and we were unable to recover it. 00:35:36.534 [2024-10-14 13:46:28.084794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.535 [2024-10-14 13:46:28.084859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.535 qpair failed and we were unable to recover it. 00:35:36.535 [2024-10-14 13:46:28.085168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.535 [2024-10-14 13:46:28.085248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.535 qpair failed and we were unable to recover it. 
00:35:36.535 [2024-10-14 13:46:28.085449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.535 [2024-10-14 13:46:28.085525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.535 qpair failed and we were unable to recover it. 00:35:36.535 [2024-10-14 13:46:28.085780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.535 [2024-10-14 13:46:28.085846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.535 qpair failed and we were unable to recover it. 00:35:36.535 [2024-10-14 13:46:28.086042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.535 [2024-10-14 13:46:28.086110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.535 qpair failed and we were unable to recover it. 00:35:36.535 [2024-10-14 13:46:28.086350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.535 [2024-10-14 13:46:28.086416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.535 qpair failed and we were unable to recover it. 00:35:36.536 [2024-10-14 13:46:28.086621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.536 [2024-10-14 13:46:28.086687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.536 qpair failed and we were unable to recover it. 
00:35:36.536 [2024-10-14 13:46:28.086943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.536 [2024-10-14 13:46:28.087010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.536 qpair failed and we were unable to recover it. 00:35:36.536 [2024-10-14 13:46:28.087251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.536 [2024-10-14 13:46:28.087318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.536 qpair failed and we were unable to recover it. 00:35:36.536 [2024-10-14 13:46:28.087577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.536 [2024-10-14 13:46:28.087644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.536 qpair failed and we were unable to recover it. 00:35:36.536 [2024-10-14 13:46:28.087900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.536 [2024-10-14 13:46:28.087967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.536 qpair failed and we were unable to recover it. 00:35:36.536 [2024-10-14 13:46:28.088171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.536 [2024-10-14 13:46:28.088238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.536 qpair failed and we were unable to recover it. 
00:35:36.536 [2024-10-14 13:46:28.088462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.537 [2024-10-14 13:46:28.088528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.537 qpair failed and we were unable to recover it. 00:35:36.537 [2024-10-14 13:46:28.088741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.537 [2024-10-14 13:46:28.088807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.537 qpair failed and we were unable to recover it. 00:35:36.537 [2024-10-14 13:46:28.089097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.537 [2024-10-14 13:46:28.089190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.537 qpair failed and we were unable to recover it. 00:35:36.537 [2024-10-14 13:46:28.089485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.538 [2024-10-14 13:46:28.089551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.538 qpair failed and we were unable to recover it. 00:35:36.538 [2024-10-14 13:46:28.089855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.538 [2024-10-14 13:46:28.089921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.538 qpair failed and we were unable to recover it. 
00:35:36.538 [2024-10-14 13:46:28.090120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.538 [2024-10-14 13:46:28.090207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.538 qpair failed and we were unable to recover it. 00:35:36.538 [2024-10-14 13:46:28.090484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.538 [2024-10-14 13:46:28.090550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.538 qpair failed and we were unable to recover it. 00:35:36.538 [2024-10-14 13:46:28.090766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.538 [2024-10-14 13:46:28.090831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.538 qpair failed and we were unable to recover it. 00:35:36.539 [2024-10-14 13:46:28.091078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.091168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 00:35:36.539 [2024-10-14 13:46:28.091466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.091532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 
00:35:36.539 [2024-10-14 13:46:28.091816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.091881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 00:35:36.539 [2024-10-14 13:46:28.092126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.092225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 00:35:36.539 [2024-10-14 13:46:28.092515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.092582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 00:35:36.539 [2024-10-14 13:46:28.092833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.092898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 00:35:36.539 [2024-10-14 13:46:28.093156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.093228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 
00:35:36.539 [2024-10-14 13:46:28.093449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.539 [2024-10-14 13:46:28.093514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.539 qpair failed and we were unable to recover it. 00:35:36.539 [2024-10-14 13:46:28.093809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.093874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 00:35:36.540 [2024-10-14 13:46:28.094146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.094214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 00:35:36.540 [2024-10-14 13:46:28.094495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.094561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 00:35:36.540 [2024-10-14 13:46:28.094780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.094845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 
00:35:36.540 [2024-10-14 13:46:28.095085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.095169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 00:35:36.540 [2024-10-14 13:46:28.095416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.095481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 00:35:36.540 [2024-10-14 13:46:28.095730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.095795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 00:35:36.540 [2024-10-14 13:46:28.096087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.540 [2024-10-14 13:46:28.096171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.540 qpair failed and we were unable to recover it. 00:35:36.540 [2024-10-14 13:46:28.096477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.541 [2024-10-14 13:46:28.096554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.541 qpair failed and we were unable to recover it. 
00:35:36.541 [2024-10-14 13:46:28.096806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.541 [2024-10-14 13:46:28.096884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.541 qpair failed and we were unable to recover it. 00:35:36.541 [2024-10-14 13:46:28.097149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.541 [2024-10-14 13:46:28.097227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.541 qpair failed and we were unable to recover it. 00:35:36.541 [2024-10-14 13:46:28.097486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.541 [2024-10-14 13:46:28.097552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.541 qpair failed and we were unable to recover it. 00:35:36.541 [2024-10-14 13:46:28.097771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.541 [2024-10-14 13:46:28.097836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.541 qpair failed and we were unable to recover it. 00:35:36.542 [2024-10-14 13:46:28.098082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.542 [2024-10-14 13:46:28.098166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.542 qpair failed and we were unable to recover it. 
00:35:36.542 [2024-10-14 13:46:28.098424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.542 [2024-10-14 13:46:28.098491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.542 qpair failed and we were unable to recover it. 00:35:36.542 [2024-10-14 13:46:28.098763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.542 [2024-10-14 13:46:28.098831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.542 qpair failed and we were unable to recover it. 00:35:36.542 [2024-10-14 13:46:28.099122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.542 [2024-10-14 13:46:28.099205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.542 qpair failed and we were unable to recover it. 00:35:36.542 [2024-10-14 13:46:28.099504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.542 [2024-10-14 13:46:28.099580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.542 qpair failed and we were unable to recover it. 00:35:36.542 [2024-10-14 13:46:28.099883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.542 [2024-10-14 13:46:28.099949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.542 qpair failed and we were unable to recover it. 
00:35:36.542 [2024-10-14 13:46:28.100247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.542 [2024-10-14 13:46:28.100314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.100600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.100664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.100959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.101024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.101338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.101405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.101691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.101757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.102020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.102086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.102367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.102432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.102719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.102784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.103058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.103123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.103424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.103490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.103798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.103863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.104165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.104231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.104476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.104542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.543 [2024-10-14 13:46:28.104796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.543 [2024-10-14 13:46:28.104861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.543 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.105168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.105235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.105526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.105592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.105883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.105947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.106212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.106280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.106572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.106638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.106927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.106992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.107239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.107307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.107559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.107626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.107930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.107995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.108290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.108369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.108722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.108792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.109118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.109258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.109545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.109618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.109885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.109953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.110180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.110246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.110499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.110565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.110854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.110921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.111153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.111225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.111471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.111537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.111794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.111862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.112170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.112237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.112539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.112605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.112866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.112933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.113157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.113229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.113532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.113597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.113909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.113975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.114261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.114327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.114596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.114672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.114929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.114995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.115291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.115358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.115653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.544 [2024-10-14 13:46:28.115719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.544 qpair failed and we were unable to recover it.
00:35:36.544 [2024-10-14 13:46:28.115972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.116038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.116311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.116377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.116669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.116734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.116995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.117061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.117341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.117408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.117702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.117768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.118078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.118172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.118474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.118540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.118823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.118889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.119177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.119244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.119498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.119564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.119803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.119869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.120167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.120235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.120493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.120558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.120848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.120913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.121217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.121285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.121574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.121639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.121890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.121957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.122246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.122315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.122557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.122623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.122813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.122879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.123101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.123186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.123487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.123553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.123851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.123917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.124213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.124282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.124533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.124599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.124821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.124886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.125155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.125221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.125465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.125530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.125718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.125786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.126083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.126178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.126471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.126538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.126837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.126902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.127143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.127211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.127464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.127530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.545 [2024-10-14 13:46:28.127792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.545 [2024-10-14 13:46:28.127859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.545 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.128110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.128194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.128448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.128515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.128765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.128831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.129090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.129173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.129472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.129537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.129832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.129898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.130178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.130245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.130487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.130552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.130806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.130872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.131053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.131118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.131409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.131485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.131694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.131760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.132045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.132111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.132420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.546 [2024-10-14 13:46:28.132486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.546 qpair failed and we were unable to recover it.
00:35:36.546 [2024-10-14 13:46:28.132789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.132854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.133158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.133225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.133479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.133545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.133835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.133901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.134161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.134228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 
00:35:36.546 [2024-10-14 13:46:28.134481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.134548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.134802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.134868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.135164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.135231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.135457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.135524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.135810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.135876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 
00:35:36.546 [2024-10-14 13:46:28.136158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.136225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.546 qpair failed and we were unable to recover it. 00:35:36.546 [2024-10-14 13:46:28.136523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.546 [2024-10-14 13:46:28.136589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.136851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.136917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.137163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.137231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.137488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.137554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 
00:35:36.547 [2024-10-14 13:46:28.137852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.137918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.138121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.138203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.138496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.138563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.138858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.138923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.139209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.139276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 
00:35:36.547 [2024-10-14 13:46:28.139529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.139597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.139885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.139952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.140178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.140245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.140528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.140594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.140818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.140883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 
00:35:36.547 [2024-10-14 13:46:28.141141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.141208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.141449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.141515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.141770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.141835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.142118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.142200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.142453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.142519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 
00:35:36.547 [2024-10-14 13:46:28.142756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.142821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.143071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.143165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.143423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.143492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.143793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.143859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.144162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.144230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 
00:35:36.547 [2024-10-14 13:46:28.144525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.144590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.144899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.144965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.145233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.145310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.145529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.145597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.547 [2024-10-14 13:46:28.145845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.145914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 
00:35:36.547 [2024-10-14 13:46:28.146205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.547 [2024-10-14 13:46:28.146271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.547 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.146511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.146576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.146819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.146886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.147195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.147262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.147559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.147624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 
00:35:36.548 [2024-10-14 13:46:28.147925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.147991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.148297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.148364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.148623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.148688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.148979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.149045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.149313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.149383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 
00:35:36.548 [2024-10-14 13:46:28.149640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.149705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.150009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.150075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.150352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.150420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.150720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.150786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.151057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.151122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 
00:35:36.548 [2024-10-14 13:46:28.151370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.151438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.151741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.151806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.152049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.152117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.152405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.152473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.152719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.152783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 
00:35:36.548 [2024-10-14 13:46:28.153074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.153157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.153359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.153425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.153705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.153769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.154064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.154149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.154404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.154483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 
00:35:36.548 [2024-10-14 13:46:28.154749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.154814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.155081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.155165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.548 [2024-10-14 13:46:28.155461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.548 [2024-10-14 13:46:28.155527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.548 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.155773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.155837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.156147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.156215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 
00:35:36.549 [2024-10-14 13:46:28.156472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.156539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.156733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.156798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.156998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.157065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.157318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.157386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.157677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.157744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 
00:35:36.549 [2024-10-14 13:46:28.157997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.158062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.158340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.158409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.158695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.158762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.159078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.159171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.159480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.159546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 
00:35:36.549 [2024-10-14 13:46:28.159801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.159869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.160168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.160235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.160488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.160554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.160803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.160869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 00:35:36.549 [2024-10-14 13:46:28.161115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.549 [2024-10-14 13:46:28.161197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.549 qpair failed and we were unable to recover it. 
00:35:36.549 [2024-10-14 13:46:28.161502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.549 [2024-10-14 13:46:28.161567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.549 qpair failed and we were unable to recover it.
00:35:36.549 (the same pair of errors — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), then nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeated for every subsequent reconnection attempt from [2024-10-14 13:46:28.161853] through [2024-10-14 13:46:28.199634])
00:35:36.559 [2024-10-14 13:46:28.199925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.560 [2024-10-14 13:46:28.200001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.560 qpair failed and we were unable to recover it. 00:35:36.560 [2024-10-14 13:46:28.200270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.560 [2024-10-14 13:46:28.200338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.560 qpair failed and we were unable to recover it. 00:35:36.560 [2024-10-14 13:46:28.200627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.560 [2024-10-14 13:46:28.200693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.560 qpair failed and we were unable to recover it. 00:35:36.560 [2024-10-14 13:46:28.200952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.560 [2024-10-14 13:46:28.201019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.560 qpair failed and we were unable to recover it. 00:35:36.560 [2024-10-14 13:46:28.201255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.560 [2024-10-14 13:46:28.201323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.560 qpair failed and we were unable to recover it. 
00:35:36.560 [2024-10-14 13:46:28.201570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.560 [2024-10-14 13:46:28.201633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.560 qpair failed and we were unable to recover it. 00:35:36.561 [2024-10-14 13:46:28.201915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.561 [2024-10-14 13:46:28.201980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.561 qpair failed and we were unable to recover it. 00:35:36.561 [2024-10-14 13:46:28.202276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.561 [2024-10-14 13:46:28.202344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.561 qpair failed and we were unable to recover it. 00:35:36.561 [2024-10-14 13:46:28.202593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.561 [2024-10-14 13:46:28.202660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.561 qpair failed and we were unable to recover it. 00:35:36.561 [2024-10-14 13:46:28.202949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.561 [2024-10-14 13:46:28.203015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.561 qpair failed and we were unable to recover it. 
00:35:36.561 [2024-10-14 13:46:28.203267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.561 [2024-10-14 13:46:28.203332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.561 qpair failed and we were unable to recover it. 00:35:36.562 [2024-10-14 13:46:28.203572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.203638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.562 qpair failed and we were unable to recover it. 00:35:36.562 [2024-10-14 13:46:28.203925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.203990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.562 qpair failed and we were unable to recover it. 00:35:36.562 [2024-10-14 13:46:28.204287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.204354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.562 qpair failed and we were unable to recover it. 00:35:36.562 [2024-10-14 13:46:28.204610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.204676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.562 qpair failed and we were unable to recover it. 
00:35:36.562 [2024-10-14 13:46:28.204921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.204989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.562 qpair failed and we were unable to recover it. 00:35:36.562 [2024-10-14 13:46:28.205287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.205354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.562 qpair failed and we were unable to recover it. 00:35:36.562 [2024-10-14 13:46:28.205648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.205714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.562 qpair failed and we were unable to recover it. 00:35:36.562 [2024-10-14 13:46:28.205967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.562 [2024-10-14 13:46:28.206032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.563 [2024-10-14 13:46:28.206347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.206414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 
00:35:36.563 [2024-10-14 13:46:28.206699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.206764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.563 [2024-10-14 13:46:28.206981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.207047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.563 [2024-10-14 13:46:28.207370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.207437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.563 [2024-10-14 13:46:28.207697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.207762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.563 [2024-10-14 13:46:28.208048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.208114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 
00:35:36.563 [2024-10-14 13:46:28.208397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.208464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.563 [2024-10-14 13:46:28.208696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.208762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.563 [2024-10-14 13:46:28.209045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.563 [2024-10-14 13:46:28.209111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.563 qpair failed and we were unable to recover it. 00:35:36.564 [2024-10-14 13:46:28.209417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.564 [2024-10-14 13:46:28.209483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.564 qpair failed and we were unable to recover it. 00:35:36.564 [2024-10-14 13:46:28.209739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.564 [2024-10-14 13:46:28.209804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.564 qpair failed and we were unable to recover it. 
00:35:36.564 [2024-10-14 13:46:28.210034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.564 [2024-10-14 13:46:28.210100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.564 qpair failed and we were unable to recover it. 00:35:36.564 [2024-10-14 13:46:28.210384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.564 [2024-10-14 13:46:28.210451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.564 qpair failed and we were unable to recover it. 00:35:36.564 [2024-10-14 13:46:28.210708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.564 [2024-10-14 13:46:28.210773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.564 qpair failed and we were unable to recover it. 00:35:36.564 [2024-10-14 13:46:28.211034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.564 [2024-10-14 13:46:28.211100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.211397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.211463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 
00:35:36.565 [2024-10-14 13:46:28.211739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.211804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.212062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.212147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.212402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.212468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.212716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.212782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.213089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.213173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 
00:35:36.565 [2024-10-14 13:46:28.213419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.213484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.213783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.213858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.214161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.214230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.214521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.214586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.214886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.214952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 
00:35:36.565 [2024-10-14 13:46:28.215246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.215314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.215555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.215621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.215830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.215898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.216182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.216250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.216540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.216606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 
00:35:36.565 [2024-10-14 13:46:28.216904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.216970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.217236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.217302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.217553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.217620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.217915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.217982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.218294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.218360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 
00:35:36.565 [2024-10-14 13:46:28.218625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.218690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.218922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.218989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.219266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.219332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.219596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.219662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.219890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.219956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 
00:35:36.565 [2024-10-14 13:46:28.220194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.220260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.565 [2024-10-14 13:46:28.220520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.565 [2024-10-14 13:46:28.220586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.565 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.220879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.220947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.221244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.221310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.221597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.221663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.221904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.221972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.222258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.222325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.222573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.222641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.222898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.222966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.223272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.223340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.223650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.223716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.223956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.224021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.224292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.224360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.224600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.224665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.224951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.225017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.225274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.225341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.225637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.225702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.225995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.226060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.226365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.226433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.226735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.226800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.227091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.227207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.227502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.227568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.227823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.227888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.228180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.228248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.228505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.228571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.228822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.228889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.229154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.229221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.229470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.229537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.229831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.229898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.230201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.230268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.230529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.230594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.230840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.230905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.231197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.231265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.231567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.231633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.231894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.231959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.232217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.232283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.232510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.232577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.232822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.232889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.233183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.233250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.233537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.233603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 
00:35:36.566 [2024-10-14 13:46:28.233855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.233920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.566 [2024-10-14 13:46:28.234150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.566 [2024-10-14 13:46:28.234216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.566 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.234515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.234581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.234828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.234893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.235099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.235210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 
00:35:36.567 [2024-10-14 13:46:28.235466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.235534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.235789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.235855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.236162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.236229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.236494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.236560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.236857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.236938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 
00:35:36.567 [2024-10-14 13:46:28.237233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.237301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.237565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.237630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.237918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.237984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.238245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.238311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.238561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.238628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 
00:35:36.567 [2024-10-14 13:46:28.238913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.238979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.239281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.239347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.239554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.239620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.239881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.239949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.240222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.240289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 
00:35:36.567 [2024-10-14 13:46:28.240585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.240650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.240952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.241018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.241296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.241363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.241628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.241695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.241957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.242024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 
00:35:36.567 [2024-10-14 13:46:28.242294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.242363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.242656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.242722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.242933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.243000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.243257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.243324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.243569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.243637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 
00:35:36.567 [2024-10-14 13:46:28.243897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.243963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.244253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.244320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.244606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.244671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.244921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.244987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.245206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.245273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 
00:35:36.567 [2024-10-14 13:46:28.245565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.245630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.245881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.245947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.246209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.246275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.567 qpair failed and we were unable to recover it. 00:35:36.567 [2024-10-14 13:46:28.246524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.567 [2024-10-14 13:46:28.246589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.246876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.246942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 
00:35:36.568 [2024-10-14 13:46:28.247235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.247302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.247602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.247668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.247921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.247987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.248243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.248311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.248601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.248667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 
00:35:36.568 [2024-10-14 13:46:28.248963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.249029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.249294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.249361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.249619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.249684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.249895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.249962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.250267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.250334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 
00:35:36.568 [2024-10-14 13:46:28.250583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.250659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.250886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.250950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.251240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.251309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.251590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.251654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.251937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.252002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 
00:35:36.568 [2024-10-14 13:46:28.252247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.252315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.252569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.252633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.252918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.252984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.253274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.253341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.253635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.253700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 
00:35:36.568 [2024-10-14 13:46:28.253999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.254063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.254379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.254446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.254654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.254719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.254970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.255035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.255324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.255392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 
00:35:36.568 [2024-10-14 13:46:28.255693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.255759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.256045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.256111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.256402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.256467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.256758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.256823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 00:35:36.568 [2024-10-14 13:46:28.257097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.257183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.568 qpair failed and we were unable to recover it. 
00:35:36.568 [2024-10-14 13:46:28.257434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.568 [2024-10-14 13:46:28.257501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.569 qpair failed and we were unable to recover it. 
00:35:36.576 [2024-10-14 13:46:28.296516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.296582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.296892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.296967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.297261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.297329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.297587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.297652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.297942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.298007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 
00:35:36.576 [2024-10-14 13:46:28.298308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.298375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.298624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.298689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.298976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.299041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.299326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.299395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.299694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.299759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 
00:35:36.576 [2024-10-14 13:46:28.300011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.300076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.300390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.300457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.300719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.300784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.301035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.301100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 00:35:36.576 [2024-10-14 13:46:28.301394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.576 [2024-10-14 13:46:28.301460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.576 qpair failed and we were unable to recover it. 
00:35:36.577 [2024-10-14 13:46:28.301710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.301775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.302075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.302159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.302383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.302449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.302682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.302746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.303035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.303101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 
00:35:36.577 [2024-10-14 13:46:28.303381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.303448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.303677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.303742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.304000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.304065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.304377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.304443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.304689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.304755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 
00:35:36.577 [2024-10-14 13:46:28.305053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.305119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.305402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.305468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.305661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.305727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.306015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.306097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.306332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.306398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 
00:35:36.577 [2024-10-14 13:46:28.306687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.306753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.307047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.307112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.307384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.307452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.307753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.307819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.308083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.308169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 
00:35:36.577 [2024-10-14 13:46:28.308465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.308531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.308734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.577 [2024-10-14 13:46:28.308799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.577 qpair failed and we were unable to recover it. 00:35:36.577 [2024-10-14 13:46:28.309098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.309183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.309475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.309540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.309836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.309902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.310163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.310230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.310442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.310507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.310729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.310796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.311065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.311162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.311460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.311526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.311813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.311878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.312176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.312243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.312502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.312567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.312863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.312928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.313193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.313260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.313513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.313580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.313826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.313892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.314189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.314256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.314469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.314535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.314785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.314852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.315154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.315220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.315551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.315616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.315865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.315931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.316162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.316228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.316496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.316562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.316803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.316869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.317156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.317223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.317441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.317507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.317721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.317786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.317987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.318051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.318280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.318347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.318586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.318651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.318915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.318980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.319226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.319292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.319544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.319622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.319868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.319934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.320193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.320259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.320482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.320548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.320834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.320900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.321188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.321254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 
00:35:36.578 [2024-10-14 13:46:28.321447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.321512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.321739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.321805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.322065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.322145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.578 qpair failed and we were unable to recover it. 00:35:36.578 [2024-10-14 13:46:28.322443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.578 [2024-10-14 13:46:28.322507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.579 qpair failed and we were unable to recover it. 00:35:36.579 [2024-10-14 13:46:28.322811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.579 [2024-10-14 13:46:28.322876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.579 qpair failed and we were unable to recover it. 
00:35:36.581 [2024-10-14 13:46:28.359228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.359295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.359593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.359658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.359929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.359994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.360257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.360324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.360614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.360679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 
00:35:36.582 [2024-10-14 13:46:28.360939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.361005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.361236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.361304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.361583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.361675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.362000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.362093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.362405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.362473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 
00:35:36.582 [2024-10-14 13:46:28.362794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.362859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.363072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.363161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.363457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.363523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.363820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.363885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.364175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.364264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 
00:35:36.582 [2024-10-14 13:46:28.364614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.364717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.364989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.365055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.365345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.365412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.365646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.365712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.365928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.365994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 
00:35:36.582 [2024-10-14 13:46:28.366239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.366321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.366570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.366664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.582 [2024-10-14 13:46:28.366975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.582 [2024-10-14 13:46:28.367046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.582 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.367311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.367379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.367622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.367689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 
00:35:36.861 [2024-10-14 13:46:28.367950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.368017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.368280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.368346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.368602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.368669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.368890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.368957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.369223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.369290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 
00:35:36.861 [2024-10-14 13:46:28.369541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.369608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.369868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.369934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.370185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.370252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.370514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.370580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.370798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.370864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 
00:35:36.861 [2024-10-14 13:46:28.371082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.371165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.371460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.861 [2024-10-14 13:46:28.371526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.861 qpair failed and we were unable to recover it. 00:35:36.861 [2024-10-14 13:46:28.371730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.371796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.372049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.372115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.372349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.372415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.372691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.372757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.373007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.373073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.373340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.373407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.373711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.373777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.373998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.374064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.374331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.374396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.374695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.374761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.375017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.375083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.375321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.375387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.375681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.375747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.375998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.376064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.376341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.376407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.376610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.376675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.376972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.377039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.377310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.377379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.377647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.377713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.378004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.378081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.378314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.378379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.378634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.378699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.378987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.379052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.379326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.379393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.379645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.379711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.379958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.380022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.380249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.380316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.380558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.380623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.380916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.380980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.381197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.381265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.381534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.381600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.381892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.381958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.382202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.382269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.382485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.382553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.382776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.382844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.383107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.383189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.383438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.383503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 00:35:36.862 [2024-10-14 13:46:28.383756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.383822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.862 [2024-10-14 13:46:28.384048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.862 [2024-10-14 13:46:28.384114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.862 qpair failed and we were unable to recover it. 
00:35:36.865 [2024-10-14 13:46:28.421199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7260 (9): Bad file descriptor 00:35:36.865 [2024-10-14 13:46:28.421637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.865 [2024-10-14 13:46:28.421735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.865 qpair failed and we were unable to recover it. 
00:35:36.865 [2024-10-14 13:46:28.422044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.865 [2024-10-14 13:46:28.422114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.865 qpair failed and we were unable to recover it. 00:35:36.865 [2024-10-14 13:46:28.422419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.865 [2024-10-14 13:46:28.422492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.865 qpair failed and we were unable to recover it. 00:35:36.865 [2024-10-14 13:46:28.422775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.865 [2024-10-14 13:46:28.422844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.865 qpair failed and we were unable to recover it. 00:35:36.865 [2024-10-14 13:46:28.423068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.865 [2024-10-14 13:46:28.423153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.865 qpair failed and we were unable to recover it. 00:35:36.865 [2024-10-14 13:46:28.423449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.423516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.423772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.423840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.424101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.424187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.424490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.424557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.424826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.424893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.425162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.425233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.425489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.425557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.425763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.425829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.426146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.426213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.426473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.426540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.426833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.426900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.427120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.427208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.427480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.427547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.427842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.427908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.428172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.428242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.428447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.428516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.428720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.428788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.429076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.429156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.429452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.429518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.429760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.429825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.430052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.430119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.430433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.430499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.430721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.430789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.431045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.431110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.431395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.431473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.431702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.431768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.432034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.432100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.432378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.432446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.432663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.432729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.432949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.433017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.433273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.433344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.433632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.433700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.433994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.434061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.434323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.434390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.434644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.434711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.434932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.434999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 
00:35:36.866 [2024-10-14 13:46:28.435179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.435246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.866 [2024-10-14 13:46:28.435508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.866 [2024-10-14 13:46:28.435573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.866 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.435810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.435877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.436164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.436232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.436529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.436595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.436886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.436953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.437201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.437270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.437535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.437601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.437862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.437930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.438182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.438250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.438484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.438551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.438811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.438878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.439105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.439203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.439454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.439521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.439780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.439846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.440113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.440200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.440425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.440492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.440701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.440770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.441032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.441098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.441380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.441446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.441719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.441786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.442006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.442072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.442358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.442425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.442675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.442741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.443004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.443069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.443327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.443395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.443614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.443679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.443885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.443952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.444162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.444241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.444473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.444538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.444831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.444898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.445186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.445254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.445517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.445583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.445870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.445934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.446193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.446260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.446524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.446592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.446807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.446874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.447086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.447168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.447399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.447468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.447763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.447830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 
00:35:36.867 [2024-10-14 13:46:28.448092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.448194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.448448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.867 [2024-10-14 13:46:28.448513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.867 qpair failed and we were unable to recover it. 00:35:36.867 [2024-10-14 13:46:28.448760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.868 [2024-10-14 13:46:28.448826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.868 qpair failed and we were unable to recover it. 00:35:36.868 [2024-10-14 13:46:28.449075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.868 [2024-10-14 13:46:28.449161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.868 qpair failed and we were unable to recover it. 00:35:36.868 [2024-10-14 13:46:28.449432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.868 [2024-10-14 13:46:28.449497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:36.868 qpair failed and we were unable to recover it. 
00:35:36.868 [2024-10-14 13:46:28.449754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.449820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.450065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.450163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.450354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.450426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.450689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.450757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.450987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.451054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.451330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.451408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.451653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.451719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.451937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.452002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.452291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.452359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.452576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.452643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.452863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.452931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.453228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.453297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.453597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.453663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.453863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.453932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.454168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.454236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.454470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.454535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.454827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.454893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.455163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.455230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.455475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.455540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.455794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.455860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.456144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.456210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.456461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.456526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.456775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.456840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.457097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.457193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.457423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.457490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.457730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.457796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.458041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.458109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.458382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.458450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.458706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.458773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.459045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.459109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.459351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.459428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.459683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.459748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.459998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.460065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.461543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.461607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.461875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.461932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.462208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.462290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.462534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.868 [2024-10-14 13:46:28.462615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.868 qpair failed and we were unable to recover it.
00:35:36.868 [2024-10-14 13:46:28.462896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.462950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.463126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.463201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.463452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.463525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.463754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.463808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.463987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.464040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.464298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.464371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.464606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.464661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.464843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.464897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.465109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.465177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.465380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.465433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.465671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.465743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.465944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.465998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.466275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.466346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.466579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.466662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.466836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.466892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.467100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.467183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.467456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.467514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.467699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.467776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.467972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.468026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.468228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.468305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.468477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.468530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.468732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.468786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.468967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.469023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.469235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.469289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.469502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.469556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.469756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.469809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.470052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.470105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.470342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.470396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.470651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.470724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.470939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.470992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.471215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.471294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.471491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.471565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.471847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.471920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.472138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.472204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.472429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.472501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.472799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.472869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.473047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.473096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.473343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.473411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.473662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.473732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.473934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.869 [2024-10-14 13:46:28.473985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.869 qpair failed and we were unable to recover it.
00:35:36.869 [2024-10-14 13:46:28.474169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.474222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.474484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.474553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.474809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.474859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.475099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.475168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.475434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.475503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.475749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.475817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.476054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.476103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.476349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.476429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.476711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.476780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.476981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.477033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.477331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.477413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.477643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.477716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.477887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.477939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.478157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.478223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.478455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.478524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.478814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.478865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.479075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.479126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.479340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.479418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.479626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.479705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.479887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.479940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.480191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.480245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.480391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.480444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.480636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.480700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.480974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.481054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.481313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.481370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.481564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.481618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.481814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.481867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.482097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.482181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.482365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.482436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.482742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.482808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.483028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.483095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.483343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.483396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.483658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.483724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.483937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.484014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.484239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.484293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.870 [2024-10-14 13:46:28.484538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.870 [2024-10-14 13:46:28.484591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.870 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.484909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.484975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.485245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.485299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.485471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.485534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.485801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.485867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.486112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.486224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.486433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.486486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.486741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.486807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.487056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.487111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.487305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.487358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.487587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.487655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.487909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.487975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.488247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.488300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.488551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.488616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.488856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.488922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.489199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.489252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.489425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.489478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.489652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.489732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.490051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.490117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.490400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.490473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.490761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.490826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.491154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.491234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.491473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.491535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.491829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.491894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.492233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.492286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.492516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.492589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.492850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.492914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.493157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.493222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.493442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.493509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.493797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.493862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.494198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.494251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.494417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.494471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.494687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.494764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.495116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.495203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.495501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.495566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.495829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.495895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.496194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.496260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.496557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.496622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.496929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.496995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.497260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.497327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.497571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.871 [2024-10-14 13:46:28.497640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.871 qpair failed and we were unable to recover it.
00:35:36.871 [2024-10-14 13:46:28.497901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.497967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.498238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.498304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.498598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.498664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.498968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.499035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.499301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.499369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.499698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.499765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.500016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.500081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.500362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.500429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.500728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.500794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.501087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.501174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.501465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.501531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.501833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.501899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.502191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.502258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.502554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.502621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.502876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.502943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.503235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.503302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.503564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.503630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.503878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.503947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.504174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.504241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.504502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.504568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.504854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.504920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.505220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.505287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.505543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.505609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.505896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.505962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.506181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.506250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.506535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.506602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.506860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.506925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.507179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.507247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.507496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.507564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.507864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.507930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.508196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.508262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.508562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.508628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.508920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.508995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.509253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.509320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.509594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.509660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.509949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.510014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.510309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.872 [2024-10-14 13:46:28.510377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.872 qpair failed and we were unable to recover it.
00:35:36.872 [2024-10-14 13:46:28.510632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.872 [2024-10-14 13:46:28.510697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.872 qpair failed and we were unable to recover it. 00:35:36.872 [2024-10-14 13:46:28.510949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.872 [2024-10-14 13:46:28.511014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.872 qpair failed and we were unable to recover it. 00:35:36.872 [2024-10-14 13:46:28.511302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.872 [2024-10-14 13:46:28.511368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.872 qpair failed and we were unable to recover it. 00:35:36.872 [2024-10-14 13:46:28.511629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.872 [2024-10-14 13:46:28.511694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.872 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.511984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.512050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.512325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.512390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.512607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.512675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.512917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.512984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.513274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.513342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.513601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.513669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.513931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.513997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.514290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.514356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.514651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.514717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.514970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.515036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.515312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.515379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.515644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.515710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.516012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.516079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.516336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.516402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.516660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.516727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.516934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.517001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.517229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.517296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.517586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.517652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.517912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.517988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.518233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.518300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.518558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.518624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.518836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.518901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.519171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.519238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.519448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.519514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.519762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.519827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.520142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.520210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.520428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.520494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.520692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.520759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.521008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.521075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.521355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.521422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.521682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.521747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.522006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.522071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.522390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.522460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.522756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.522821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.523107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.523193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.523489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.523556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.873 [2024-10-14 13:46:28.523806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.523871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.524164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.524231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.524527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.524593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.524908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.524973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 00:35:36.873 [2024-10-14 13:46:28.525236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.873 [2024-10-14 13:46:28.525304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.873 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.525555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.525622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.525927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.525994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.526280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.526347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.526649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.526715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.526965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.527034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.527356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.527424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.527670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.527737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.527989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.528058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.528294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.528360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.528620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.528688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.528982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.529049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.529320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.529387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.529682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.529747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.529962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.530028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.530306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.530373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.530669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.530735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.531015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.531079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.531402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.531469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.531715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.531791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.532047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.532112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.532426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.532492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.532792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.532858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.533162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.533229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.533447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.533515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.533752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.533818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.534082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.534178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.534446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.534513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.534810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.534876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.535147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.535213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.535501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.535568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.535793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.535859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.536120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.536200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.536507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.536573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.536830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.536898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.537178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.537245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 
00:35:36.874 [2024-10-14 13:46:28.537543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.874 [2024-10-14 13:46:28.537609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.874 qpair failed and we were unable to recover it. 00:35:36.874 [2024-10-14 13:46:28.537863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.537928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 00:35:36.875 [2024-10-14 13:46:28.538180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.538247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 00:35:36.875 [2024-10-14 13:46:28.538532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.538599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 00:35:36.875 [2024-10-14 13:46:28.538860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.538927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 
00:35:36.875 [2024-10-14 13:46:28.539176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.539245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 00:35:36.875 [2024-10-14 13:46:28.539507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.539573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 00:35:36.875 [2024-10-14 13:46:28.539842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.539909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 00:35:36.875 [2024-10-14 13:46:28.540172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.540239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 00:35:36.875 [2024-10-14 13:46:28.540483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.875 [2024-10-14 13:46:28.540549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.875 qpair failed and we were unable to recover it. 
00:35:36.877 [2024-10-14 13:46:28.576997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.877 [2024-10-14 13:46:28.577063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.877 qpair failed and we were unable to recover it. 00:35:36.877 [2024-10-14 13:46:28.577296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.877 [2024-10-14 13:46:28.577362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.877 qpair failed and we were unable to recover it. 00:35:36.877 [2024-10-14 13:46:28.577648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.877 [2024-10-14 13:46:28.577714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.877 qpair failed and we were unable to recover it. 00:35:36.877 [2024-10-14 13:46:28.577985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.578050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.578258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.578324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.578615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.578680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.578924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.578990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.579267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.579335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.579638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.579704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.579915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.579982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.580173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.580240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.580492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.580558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.580801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.580866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.581098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.581184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.581444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.581510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.581731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.581796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.582082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.582180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.582442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.582508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.582739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.582805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.583024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.583089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.583371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.583438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.583700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.583765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.584007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.584072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.584317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.584383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.584613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.584679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.584934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.584998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.585244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.585313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.585585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.585652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.585893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.585958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.586236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.586303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.586548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.586614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.586911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.586976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.587214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.587281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.587495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.587561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.587779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.587844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.588078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.588158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.588391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.588457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.588669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.588734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.588980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.589046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 00:35:36.878 [2024-10-14 13:46:28.589321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.589387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.878 qpair failed and we were unable to recover it. 
00:35:36.878 [2024-10-14 13:46:28.589646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.878 [2024-10-14 13:46:28.589721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.589917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.589982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.590238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.590305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.590545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.590610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.590827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.590893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 
00:35:36.879 [2024-10-14 13:46:28.591185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.591253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.591505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.591569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.591868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.591933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.592182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.592249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.592465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.592530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 
00:35:36.879 [2024-10-14 13:46:28.592802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.592868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.593119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.593200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.593457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.593522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.593777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.593842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.594106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.594194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 
00:35:36.879 [2024-10-14 13:46:28.594403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.594470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.594724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.594790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.595037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.595103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.595390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.595457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.595712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.595778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 
00:35:36.879 [2024-10-14 13:46:28.596067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.596154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.596400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.596467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.596694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.596759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.596990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.597056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.597320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.597387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 
00:35:36.879 [2024-10-14 13:46:28.597608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.597674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.597867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.597934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.598152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.598230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.598492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.598557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.598750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.598817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 
00:35:36.879 [2024-10-14 13:46:28.599073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.599157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.599410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.599475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.599714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.599780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.600011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.600075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 00:35:36.879 [2024-10-14 13:46:28.600303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.879 [2024-10-14 13:46:28.600370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:36.879 qpair failed and we were unable to recover it. 
00:35:36.879 [2024-10-14 13:46:28.600592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.879 [2024-10-14 13:46:28.600658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:36.879 qpair failed and we were unable to recover it.
[... same connect()/qpair-failure triple for tqpair=0x5c9340 repeated from 13:46:28.600878 through 13:46:28.623791 ...]
00:35:36.881 [2024-10-14 13:46:28.624091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.881 [2024-10-14 13:46:28.624208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:36.881 qpair failed and we were unable to recover it.
[... same connect()/qpair-failure triple for tqpair=0x7f999c000b90 repeated from 13:46:28.624462 through 13:46:28.637274 ...]
00:35:36.882 [2024-10-14 13:46:28.637563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.637626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.637916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.637977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.638257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.638320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.638576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.638638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.638899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.638962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 
00:35:36.882 [2024-10-14 13:46:28.639219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.639285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.639594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.639659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.639956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.640018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.640249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.640313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.640554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.640617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 
00:35:36.882 [2024-10-14 13:46:28.640872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.882 [2024-10-14 13:46:28.640937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.882 qpair failed and we were unable to recover it. 00:35:36.882 [2024-10-14 13:46:28.641198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.641263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.641502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.641576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.641855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.641917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.642192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.642259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 
00:35:36.883 [2024-10-14 13:46:28.642475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.642540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.642761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.642825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.643086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.643189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.643470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.643537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.643741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.643807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 
00:35:36.883 [2024-10-14 13:46:28.644047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.644113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.644419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.644485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.644696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.644763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.644961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.645027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.645241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.645308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 
00:35:36.883 [2024-10-14 13:46:28.645513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.645578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.645838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.645906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.646201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.646269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.646512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.646577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.646789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.646854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 
00:35:36.883 [2024-10-14 13:46:28.647095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.647173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.647439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.647505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.647745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.647811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.648028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.648095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.648361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.648426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 
00:35:36.883 [2024-10-14 13:46:28.648706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.648772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.649029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.649096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.649413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.649479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.649766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.649833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.650143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.650212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 
00:35:36.883 [2024-10-14 13:46:28.650419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.650485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.650734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.650802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.651051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.651116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.651439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.651504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.651802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.651867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 
00:35:36.883 [2024-10-14 13:46:28.652120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.883 [2024-10-14 13:46:28.652202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.883 qpair failed and we were unable to recover it. 00:35:36.883 [2024-10-14 13:46:28.652412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.652478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.652752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.652818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.653113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.653190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.653487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.653553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 
00:35:36.884 [2024-10-14 13:46:28.653764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.653830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.654119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.654197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.654490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.654569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.654869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.654935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.655186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.655252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 
00:35:36.884 [2024-10-14 13:46:28.655501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.655567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.655785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.655853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.656152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.656218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.656520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.656585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.656868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.656933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 
00:35:36.884 [2024-10-14 13:46:28.657197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.657263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.657566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.657631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.657925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.657992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.658248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.658314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.658611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.658676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 
00:35:36.884 [2024-10-14 13:46:28.658929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.658994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.659298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.659365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.659661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.659728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.659976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.660041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.660311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.660377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 
00:35:36.884 [2024-10-14 13:46:28.660628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.660694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.660995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.661060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.661341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.661407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.661691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.661758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 00:35:36.884 [2024-10-14 13:46:28.662001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.662066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it. 
00:35:36.884 [2024-10-14 13:46:28.662331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.884 [2024-10-14 13:46:28.662397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:36.884 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats 114 more times, from 13:46:28.662678 through 13:46:28.700454: connect() refused with errno = 111 for tqpair=0x7f999c000b90, addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
00:35:37.174 [2024-10-14 13:46:28.700774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.700842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.701110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.701208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.701453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.701530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.701768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.701836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.702170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.702239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 
00:35:37.174 [2024-10-14 13:46:28.702512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.702580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.702869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.702935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.703184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.703254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.703498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.703599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.703852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.703936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 
00:35:37.174 [2024-10-14 13:46:28.704215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.704284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.704486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.704552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.704748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.704815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.705070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.705150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.705449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.705513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 
00:35:37.174 [2024-10-14 13:46:28.705808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.705873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.706123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.706200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.706462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.174 [2024-10-14 13:46:28.706528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.174 qpair failed and we were unable to recover it. 00:35:37.174 [2024-10-14 13:46:28.706819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.706885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.707183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.707251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.707497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.707564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.707824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.707889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.708179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.708245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.708502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.708568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.708850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.708915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.709220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.709286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.709582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.709646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.709911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.709974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.710232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.710298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.710555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.710620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.710879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.710942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.711241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.711306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.711559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.711626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.711918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.711983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.712279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.712345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.712581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.712645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.712963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.713028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.713290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.713355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.713650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.713713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.714012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.714076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.714345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.714411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.714665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.714729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.714958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.715023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.715315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.715383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.715613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.715678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.715887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.715950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.716246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.716312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.716617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.716682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.716927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.716991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.717288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.717365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.717616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.717683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.717980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.718044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.718267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.718334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.718577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.718644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.718945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.719009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 
00:35:37.175 [2024-10-14 13:46:28.719271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.719339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.719594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.175 [2024-10-14 13:46:28.719660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.175 qpair failed and we were unable to recover it. 00:35:37.175 [2024-10-14 13:46:28.719917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.719982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.720281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.720347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.720639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.720702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 
00:35:37.176 [2024-10-14 13:46:28.720997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.721062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.721333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.721397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.721659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.721723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.721966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.722031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.722304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.722372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 
00:35:37.176 [2024-10-14 13:46:28.722583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.722649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.722921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.722986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.723239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.723305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.723559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.723624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.723869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.723932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 
00:35:37.176 [2024-10-14 13:46:28.724217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.724282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.724500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.724565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.724764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.724829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.725067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.725144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 00:35:37.176 [2024-10-14 13:46:28.725449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.176 [2024-10-14 13:46:28.725514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.176 qpair failed and we were unable to recover it. 
00:35:37.176 [2024-10-14 13:46:28.725809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.725874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.726181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.726249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.726476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.726542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.726755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.726822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.727080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.727158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.727465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.727530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.727751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.727815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.728071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.728153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.728429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.728494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.728748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.728813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.729074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.729152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.729411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.729476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.729692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.729760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.730022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.730087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.730390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.730465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.730766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.730831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.731096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.731192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.731437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.731503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.731790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.731855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.732115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.732208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.732461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.732526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.732819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.176 [2024-10-14 13:46:28.732883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.176 qpair failed and we were unable to recover it.
00:35:37.176 [2024-10-14 13:46:28.733168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.733235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.733521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.733586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.733870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.733934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.734231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.734297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.734553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.734621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.734904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.734968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.735275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.735342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.735634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.735700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.735993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.736057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.736309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.736377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.736642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.736707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.737009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.737074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.737385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.737451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.737746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.737812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.738059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.738124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.738439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.738505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.738762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.738827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.739039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.739105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.739389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.739455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.739726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.739793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.740041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.740108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.740488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.740554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.740806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.740872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.741065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.741142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.741440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.741505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.741750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.741813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.742085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.742164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.742417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.742484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.742773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.742837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.743099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.743256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.743553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.743619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.743882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.743945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.744276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.744357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.744648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.744714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.744972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.745037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.745322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.745388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.745631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.745701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.745993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.746059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.746336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.177 [2024-10-14 13:46:28.746402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.177 qpair failed and we were unable to recover it.
00:35:37.177 [2024-10-14 13:46:28.746644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.746714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.746998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.747061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.747392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.747459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.747687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.747753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.747988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.748052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.748311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.748378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.748633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.748699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.748967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.749032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.749339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.749405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.749619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.749684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.749971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.750037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.750341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.750407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.750634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.750698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.750948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.751016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.751276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.751342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.751550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.751616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.751874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.751941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.752189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.752258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.752508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.752573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.752856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.752921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.753230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.753296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.753563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.753627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.753880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.753947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.754245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.754311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.754597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.754661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.754969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.755034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.755325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.755391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.755673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.755738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.755995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.756060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.756316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.756381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.756613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.756679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.756967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.757032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.757305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.757370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.178 qpair failed and we were unable to recover it.
00:35:37.178 [2024-10-14 13:46:28.757635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.178 [2024-10-14 13:46:28.757711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.758011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.758077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.758302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.758366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.758650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.758715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.759016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.759083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.759357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.759422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.759684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.759749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.760004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.760069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.760377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.760442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.760742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.760807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.761052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.761118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.761382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.761449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.761659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.761725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.762005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.762070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.762313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.762379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.762661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.762725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.762973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.763039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.763368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.763434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.763696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.763760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.764050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.764115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.764423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.179 [2024-10-14 13:46:28.764489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.179 qpair failed and we were unable to recover it.
00:35:37.179 [2024-10-14 13:46:28.764751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.764817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.765062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.765143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.765402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.765467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.765709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.765777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.766061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.766126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 
00:35:37.179 [2024-10-14 13:46:28.766396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.766460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.766712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.766778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.767082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.767159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.767396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.767461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.767722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.767786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 
00:35:37.179 [2024-10-14 13:46:28.767988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.768054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.768344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.768412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.768708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.768773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.769050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.769117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.769417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.769482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 
00:35:37.179 [2024-10-14 13:46:28.769786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.769851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.770101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.770183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.770393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.770457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.179 [2024-10-14 13:46:28.770665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.179 [2024-10-14 13:46:28.770730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.179 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.770922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.771000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.771230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.771297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.771550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.771615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.771877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.771942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.772205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.772270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.772528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.772593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.772887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.772952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.773208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.773274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.773567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.773641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.773944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.774016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.774287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.774352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.774632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.774697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.774986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.775051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.775320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.775386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.775668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.775733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.775983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.776047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.776341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.776408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.776670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.776736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.777016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.777081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.777359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.777425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.777663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.777729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.777978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.778043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.778337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.778404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.778700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.778766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.779019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.779083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.779354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.779421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.779714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.779780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.780047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.780113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.780421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.780487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.780777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.780842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.781072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.781151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.781445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.781511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.781806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.781872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.782070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.782151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.782418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.782483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.782746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.782812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 
00:35:37.180 [2024-10-14 13:46:28.783033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.783097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.783361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.783429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.783725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.783792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.180 [2024-10-14 13:46:28.783987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.180 [2024-10-14 13:46:28.784051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.180 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.784322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.784399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.784611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.784677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.784944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.785009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.785292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.785359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.785570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.785634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.785875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.785942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.786181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.786248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.786548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.786614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.786911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.786976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.787248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.787315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.787602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.787667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.787905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.787970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.788230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.788298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.788561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.788627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.788853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.788919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.789216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.789282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.789501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.789566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.789806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.789872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.790122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.790199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.790460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.790525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.790823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.790888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.791153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.791219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.791505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.791571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.791871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.791947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.792202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.792269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.792563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.792639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.792899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.792966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.793276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.793343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.793570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.793636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.793924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.793990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.794280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.794346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.794601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.794666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.794902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.794967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.795279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.795345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.795582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.795648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.795893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.795960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 
00:35:37.181 [2024-10-14 13:46:28.796263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.796330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.796547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.796613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.796884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.796948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.797192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.181 [2024-10-14 13:46:28.797259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.181 qpair failed and we were unable to recover it. 00:35:37.181 [2024-10-14 13:46:28.797479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.797556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 
00:35:37.182 [2024-10-14 13:46:28.797826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.797891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.798178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.798244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.798452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.798517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.798765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.798829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.799108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.799188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 
00:35:37.182 [2024-10-14 13:46:28.799477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.799543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.799785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.799849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.800092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.800188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.800445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.800511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.800739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.800803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 
00:35:37.182 [2024-10-14 13:46:28.801057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.801122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.801392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.801458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.801745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.801809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 400311 Killed "${NVMF_APP[@]}" "$@"
00:35:37.182 [2024-10-14 13:46:28.802063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.802144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.802402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.802467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:37.182 [2024-10-14 13:46:28.802679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.802746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:37.182 [2024-10-14 13:46:28.802999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.803064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:35:37.182 [2024-10-14 13:46:28.803330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.803396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:37.182 [2024-10-14 13:46:28.803581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.803646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.182 [2024-10-14 13:46:28.803901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.803966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.804216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.804282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.804535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.804599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.804858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.804923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.805145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.182 [2024-10-14 13:46:28.805221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.182 qpair failed and we were unable to recover it.
00:35:37.182 [2024-10-14 13:46:28.805441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.805508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.805801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.805867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.806156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.806190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.806303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.806337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.806486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.806521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 
00:35:37.182 [2024-10-14 13:46:28.806661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.806694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.806804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.806838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.806979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.807013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.807167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.807202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.807345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.807382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 
00:35:37.182 [2024-10-14 13:46:28.807525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.807559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.807666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.807700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.182 [2024-10-14 13:46:28.807814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.182 [2024-10-14 13:46:28.807849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.182 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.807958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.807992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.808110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.808162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 
00:35:37.183 [2024-10-14 13:46:28.808302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.808337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.808462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.808497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.808617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.808651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.808767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.808841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.809063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.809097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 
00:35:37.183 [2024-10-14 13:46:28.809207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.809242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 [2024-10-14 13:46:28.809340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.809374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=400850
00:35:37.183 [2024-10-14 13:46:28.809487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.809522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 400850
00:35:37.183 [2024-10-14 13:46:28.809666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.809700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 [2024-10-14 13:46:28.809838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.809872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 400850 ']'
00:35:37.183 [2024-10-14 13:46:28.809989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.810024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:37.183 [2024-10-14 13:46:28.810151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.810186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:37.183 [2024-10-14 13:46:28.810329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.810364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:37.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:37.183 [2024-10-14 13:46:28.810475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.810510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:37.183 [2024-10-14 13:46:28.810619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.810653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 13:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.183 [2024-10-14 13:46:28.810766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.810799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 [2024-10-14 13:46:28.810948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.810981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
00:35:37.183 [2024-10-14 13:46:28.811123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.811165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.811272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.811309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.811418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.811452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.811601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.811635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 00:35:37.183 [2024-10-14 13:46:28.811773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.183 [2024-10-14 13:46:28.811808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.183 qpair failed and we were unable to recover it. 
00:35:37.183 [2024-10-14 13:46:28.811925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.183 [2024-10-14 13:46:28.811960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.183 qpair failed and we were unable to recover it.
[... 2024-10-14 13:46:28.812101 through 13:46:28.828944: the same connect() failure (errno = 111) and unrecoverable qpair error repeat continuously for tqpair=0x7f999c000b90, 0x7f9994000b90, and 0x5c9340, all with addr=10.0.0.2, port=4420 ...]
00:35:37.186 [2024-10-14 13:46:28.829079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.829108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 00:35:37.186 [2024-10-14 13:46:28.829214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.829244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 00:35:37.186 [2024-10-14 13:46:28.829339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.829369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 00:35:37.186 [2024-10-14 13:46:28.829459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.829494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 00:35:37.186 [2024-10-14 13:46:28.829612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.829641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 
00:35:37.186 [2024-10-14 13:46:28.829773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.829803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 00:35:37.186 [2024-10-14 13:46:28.829896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.829925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 00:35:37.186 [2024-10-14 13:46:28.830049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.186 [2024-10-14 13:46:28.830078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.186 qpair failed and we were unable to recover it. 00:35:37.186 [2024-10-14 13:46:28.830184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.830230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.830324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.830351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 
00:35:37.187 [2024-10-14 13:46:28.830452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.830480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.830568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.830595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.830726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.830754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.830849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.830877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.830992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 
00:35:37.187 [2024-10-14 13:46:28.831111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.831254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.831374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.831496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.831614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 
00:35:37.187 [2024-10-14 13:46:28.831729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.831853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.831881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.832001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.832117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.832252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 
00:35:37.187 [2024-10-14 13:46:28.832364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.832481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.832628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.832751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.832865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.832894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 
00:35:37.187 [2024-10-14 13:46:28.833016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.833136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.833287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.833398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.833546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 
00:35:37.187 [2024-10-14 13:46:28.833689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.833803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.833927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.833968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.834073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.834102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.834213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.834242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 
00:35:37.187 [2024-10-14 13:46:28.834321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.834348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.834427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.187 [2024-10-14 13:46:28.834453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.187 qpair failed and we were unable to recover it. 00:35:37.187 [2024-10-14 13:46:28.834543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.834571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.834662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.834709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.834809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.834837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.834957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.834985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.835098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.835134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.835227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.835255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.835345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.835372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.835487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.835515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.835628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.835655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.835743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.835770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.835864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.835891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.835978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.836115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.836244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.836360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.836478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.836593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.836739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.836863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.836889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.837004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.837157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.837283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.837400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.837519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.837633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.837744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.837899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.837939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.838033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.838162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.838286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.838397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.838508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.838619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.838728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.838840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.838961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.838989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.839069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.839096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.839219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.839247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 
00:35:37.188 [2024-10-14 13:46:28.839342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.839369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.839449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.839476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.188 qpair failed and we were unable to recover it. 00:35:37.188 [2024-10-14 13:46:28.839558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.188 [2024-10-14 13:46:28.839584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.839670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.839697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.839793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.839820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.839932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.839958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.840491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.840964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.840991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.841077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.841203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.841325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.841468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.841575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.841684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.841801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.841916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.841943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.842028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.842146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.842260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.842377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.842480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.842589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.842708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.842856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.842967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.842993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.843079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.843188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.843301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.843401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.843510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.843623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.843759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.843898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.843927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 
00:35:37.189 [2024-10-14 13:46:28.844036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.844063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.844142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.189 [2024-10-14 13:46:28.844169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.189 qpair failed and we were unable to recover it. 00:35:37.189 [2024-10-14 13:46:28.844267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.844294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.844380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.844406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.844492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.844526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.844638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.844664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.844755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.844782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.844866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.844893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.845013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.845141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.845262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.845407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.845553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.845682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.845798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.845913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.845941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.846055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.846179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.846302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.846417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.846525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.846657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.846829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.846950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.846978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.847066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.847188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.847319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.847452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.847573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.847681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.847821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.847942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.847971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.848061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.848193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.848313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.848466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.848584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.848697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.848842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.848959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.848989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 
00:35:37.190 [2024-10-14 13:46:28.849073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.849101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.849195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.190 [2024-10-14 13:46:28.849222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.190 qpair failed and we were unable to recover it. 00:35:37.190 [2024-10-14 13:46:28.849306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.849334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.849421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.849447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.849532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.849564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 
00:35:37.191 [2024-10-14 13:46:28.849655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.849682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.849772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.849799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.849886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.849913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.850026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.850146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 
00:35:37.191 [2024-10-14 13:46:28.850256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.850402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.850527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.850644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.850750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 
00:35:37.191 [2024-10-14 13:46:28.850855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.850964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.850992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.851095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.851145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.851284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.851324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.851446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.851475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 
00:35:37.191 [2024-10-14 13:46:28.851572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.851599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.851691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.851719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.851799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.851827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.851918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.851945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.852056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 
00:35:37.191 [2024-10-14 13:46:28.852186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.852328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.852471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.852587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.852693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 
00:35:37.191 [2024-10-14 13:46:28.852842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.852956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.852996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.853101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.853137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.853238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.853265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.853354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.853381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 
00:35:37.191 [2024-10-14 13:46:28.853467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.853495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.853608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.853636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.191 qpair failed and we were unable to recover it. 00:35:37.191 [2024-10-14 13:46:28.853723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.191 [2024-10-14 13:46:28.853751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.853878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.853909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.854002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.854145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.854284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.854404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.854519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.854631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.854750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.854893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.854920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.855002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.855119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.855244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.855388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.855503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.855612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.855724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.855855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.855894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.855996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.856144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.856294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.856444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.856567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.856704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.856820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.856930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.856958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.857051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.857197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.857311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.857438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.857583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.857694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.857824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.857951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.857991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.858117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.858155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.858251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.858278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.858366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.858394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.858504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.858530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 
00:35:37.192 [2024-10-14 13:46:28.858617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.858644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.858737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.858765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.858850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.192 [2024-10-14 13:46:28.858877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.192 qpair failed and we were unable to recover it. 00:35:37.192 [2024-10-14 13:46:28.858970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.858997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.859097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.859145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 
00:35:37.193 [2024-10-14 13:46:28.859235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.859263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.859387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.859416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.859503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.859531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.859618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.859645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.859735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.859762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 
00:35:37.193 [2024-10-14 13:46:28.859856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.859902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.860024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.860170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.860280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.860400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 
00:35:37.193 [2024-10-14 13:46:28.860514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.860629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.860774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.860914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.860943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.861032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 
00:35:37.193 [2024-10-14 13:46:28.861145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.861260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.861377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.861501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.861616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 
00:35:37.193 [2024-10-14 13:46:28.861728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.861831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.861945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.861973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.862086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.862215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 
00:35:37.193 [2024-10-14 13:46:28.862328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.862431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.862557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.862720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.862842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.862907] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:35:37.193 [2024-10-14 13:46:28.862965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.862971] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.193 [2024-10-14 13:46:28.862991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.863092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.863119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.863270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.863296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 00:35:37.193 [2024-10-14 13:46:28.863397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.193 [2024-10-14 13:46:28.863423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.193 qpair failed and we were unable to recover it. 
00:35:37.196 [2024-10-14 13:46:28.877864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.196 [2024-10-14 13:46:28.877891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.196 qpair failed and we were unable to recover it. 00:35:37.196 [2024-10-14 13:46:28.878086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.196 [2024-10-14 13:46:28.878126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.196 qpair failed and we were unable to recover it. 00:35:37.196 [2024-10-14 13:46:28.878252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.196 [2024-10-14 13:46:28.878282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.196 qpair failed and we were unable to recover it. 00:35:37.196 [2024-10-14 13:46:28.878389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.196 [2024-10-14 13:46:28.878434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.196 qpair failed and we were unable to recover it. 00:35:37.196 [2024-10-14 13:46:28.878583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.196 [2024-10-14 13:46:28.878611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.196 qpair failed and we were unable to recover it. 
00:35:37.196 [2024-10-14 13:46:28.878699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.196 [2024-10-14 13:46:28.878726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.196 qpair failed and we were unable to recover it. 00:35:37.196 [2024-10-14 13:46:28.878846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.878872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.878963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.878993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.879089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.879115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.879207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.879234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 
00:35:37.197 [2024-10-14 13:46:28.879314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.879340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.879422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.879448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.879561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.879593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.879709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.879736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.879851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.879878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 
00:35:37.197 [2024-10-14 13:46:28.879992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.880143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.880271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.880440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.880588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 
00:35:37.197 [2024-10-14 13:46:28.880708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.880818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.880932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.880959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.881037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.881064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.881182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.881222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 
00:35:37.197 [2024-10-14 13:46:28.881313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.881341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.881476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.881503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.881612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.881639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.881750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.881776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.881896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.881925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 
00:35:37.197 [2024-10-14 13:46:28.882007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.882034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.882165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.882193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.882308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.882335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.882418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.882445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.882558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.882585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 
00:35:37.197 [2024-10-14 13:46:28.882685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.882712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.882825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.882851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.882977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.883003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.883091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.883118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 00:35:37.197 [2024-10-14 13:46:28.883251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.197 [2024-10-14 13:46:28.883284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.197 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.883376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.883403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.883543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.883581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.883693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.883719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.883837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.883864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.883971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.883997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.884106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.884138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.884254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.884280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.884368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.884397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.884518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.884544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.884625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.884651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.884731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.884758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.884864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.884890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.884975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.885091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.885220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.885362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.885510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.885650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.885787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.885909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.885935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.886049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.886165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.886273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.886412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.886547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.886664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.886787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.886966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.886993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.887113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.887145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.887259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.887285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.887366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.887393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.887547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.887575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.887667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.887693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.887822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.887861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 00:35:37.198 [2024-10-14 13:46:28.887983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.198 [2024-10-14 13:46:28.888012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.198 qpair failed and we were unable to recover it. 
00:35:37.198 [2024-10-14 13:46:28.888139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.198 [2024-10-14 13:46:28.888167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.198 qpair failed and we were unable to recover it.
00:35:37.198 [2024-10-14 13:46:28.888286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.888313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.888394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.888430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.888548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.888575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.888666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.888694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.888821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.888849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.888974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.889965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.889991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.890108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.890141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.890230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.890257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.890393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.890430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.890523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.890549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.890672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.890698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.890785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.890814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.890955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.890981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.891083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.891144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.891244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.891273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.891387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.891425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.891553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.891580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.891697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.891724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.891819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.891846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.891934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.891960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.892049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.892077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.892175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.892202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.892318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.892344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.892454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.892480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.199 qpair failed and we were unable to recover it.
00:35:37.199 [2024-10-14 13:46:28.892594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.199 [2024-10-14 13:46:28.892621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.892701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.892727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.892820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.892847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.892935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.892961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.893963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.893991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.894088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.894146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.894280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.894309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.894425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.894452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.894578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.894606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.894735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.894762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.894853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.894880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.894998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.895025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.895148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.895177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.895293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.895320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.895432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.895459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.895545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.895572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.895719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.895746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.895862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.895889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.895981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.896015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.896100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.896139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.896227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.896254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.896401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.896437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.896574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.896602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.896716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.896744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.896857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.896884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.897033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.897060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.897211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.897239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.897358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.897385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.897478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.897506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.897624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.200 [2024-10-14 13:46:28.897663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.200 qpair failed and we were unable to recover it.
00:35:37.200 [2024-10-14 13:46:28.897783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.897810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.897903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.897930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.898084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.898207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.898346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.898501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.898608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.898713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.898876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.898984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.899011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.899158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.899199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.899293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.899322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.899456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.899496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.899584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.899613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.899726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.899752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.899875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.899903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.900031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.900059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.900199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.900226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.900337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.900364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.900455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.900482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.900597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.900624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.900710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.900737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.900855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.900881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.901006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.901047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.901194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.901224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.901311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.901338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.901418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.201 [2024-10-14 13:46:28.901445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.201 qpair failed and we were unable to recover it.
00:35:37.201 [2024-10-14 13:46:28.901559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.901587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.901704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.901735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.901850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.901878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.901995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.902122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.902248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.902362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.902526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.902672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.902785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.902919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.902946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.903090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.903133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.903223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.903250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.903330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.903357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.903481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.903508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.903624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.903652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.903736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.903762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.903872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.202 [2024-10-14 13:46:28.903898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.202 qpair failed and we were unable to recover it.
00:35:37.202 [2024-10-14 13:46:28.904020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.904140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.904257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.904376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.904498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 
00:35:37.202 [2024-10-14 13:46:28.904615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.904727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.904872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.904898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.904985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.905140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 
00:35:37.202 [2024-10-14 13:46:28.905281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.905400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.905553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.905658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.905800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 
00:35:37.202 [2024-10-14 13:46:28.905909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.905936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.202 [2024-10-14 13:46:28.906074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.202 [2024-10-14 13:46:28.906101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.202 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.906210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.906238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.906323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.906350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.906451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.906522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 
00:35:37.203 [2024-10-14 13:46:28.906695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.906721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.906833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.906872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.906966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.906994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.907085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.907124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.907221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.907249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 
00:35:37.203 [2024-10-14 13:46:28.907355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.907382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.907471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.907497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.907634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.907660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.907774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.907800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.907927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.907967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 
00:35:37.203 [2024-10-14 13:46:28.908068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.908097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.908225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.908252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.908362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.908389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.908478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.908504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.908617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.908644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 
00:35:37.203 [2024-10-14 13:46:28.908760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.908788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.908885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.908926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.909088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.909150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.909280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.909308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.909424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.909453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 
00:35:37.203 [2024-10-14 13:46:28.909572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.909599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.909680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.909707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.909856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.909884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.909965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.909992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.910110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.910149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 
00:35:37.203 [2024-10-14 13:46:28.910242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.910268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.910344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.910371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.910488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.910515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.910593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.910619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.910705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.910732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 
00:35:37.203 [2024-10-14 13:46:28.910830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.910859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.910996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.911035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.911149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.203 [2024-10-14 13:46:28.911190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.203 qpair failed and we were unable to recover it. 00:35:37.203 [2024-10-14 13:46:28.911284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.911313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.911426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.911453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 
00:35:37.204 [2024-10-14 13:46:28.911573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.911602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.911716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.911744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.911858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.911884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.912002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.912145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 
00:35:37.204 [2024-10-14 13:46:28.912256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.912369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.912517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.912660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.912783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 
00:35:37.204 [2024-10-14 13:46:28.912939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.912969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.913064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.913091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.913182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.913210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.913291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.913319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.913437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.913464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 
00:35:37.204 [2024-10-14 13:46:28.913590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.913617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.913713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.913741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.913851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.913877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.913975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.914112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 
00:35:37.204 [2024-10-14 13:46:28.914258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.914375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.914522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.914676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.914782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 
00:35:37.204 [2024-10-14 13:46:28.914895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.914921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.915007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.915034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.915114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.915152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.204 [2024-10-14 13:46:28.915276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.204 [2024-10-14 13:46:28.915315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.204 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.915438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.915466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.915586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.915614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.915702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.915730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.915840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.915867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.915951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.915979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.916101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.916142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.916264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.916304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.916408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.916445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.916570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.916598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.916716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.916745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.916860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.916886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.917016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.917056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.917154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.917183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.917276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.917303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.917427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.917455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.917545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.917572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.917715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.917741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.917879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.917906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.918049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.918075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.918171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.918200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.918321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.918354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.918485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.918512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.918627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.918654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.918758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.918785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.918880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.918906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.919024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.919052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.919153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.919180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.919262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.919288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.919374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.919399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.919549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.919575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.919658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.919686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.919825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.919853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.919976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.920016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.920172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.920212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.920342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.920370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.920460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.920487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 
00:35:37.205 [2024-10-14 13:46:28.920608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.920635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.920720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.920748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.205 qpair failed and we were unable to recover it. 00:35:37.205 [2024-10-14 13:46:28.920866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.205 [2024-10-14 13:46:28.920893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.921009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.921155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 
00:35:37.206 [2024-10-14 13:46:28.921265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.921409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.921561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.921700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.921849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 
00:35:37.206 [2024-10-14 13:46:28.921968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.921995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.922087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.922150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.922277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.922307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.922399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.922437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.922559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.922586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 
00:35:37.206 [2024-10-14 13:46:28.922728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.922755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.922875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.922902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.922986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.923116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.923269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 
00:35:37.206 [2024-10-14 13:46:28.923416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.923521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.923660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.923798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.923914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.923947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 
00:35:37.206 [2024-10-14 13:46:28.924049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.924089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.924198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.924228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.924341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.924368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.924488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.924514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 00:35:37.206 [2024-10-14 13:46:28.924636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.206 [2024-10-14 13:46:28.924663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.206 qpair failed and we were unable to recover it. 
00:35:37.206 [2024-10-14 13:46:28.924779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.924807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.924899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.924928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.925035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.925062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.925187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.925214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.925322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.925349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 
00:35:37.207 [2024-10-14 13:46:28.925438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.925464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.925578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.925604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.925717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.925743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.925852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.925879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.926022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.926050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 
00:35:37.207 [2024-10-14 13:46:28.926187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.926215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.926302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.926330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.926452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.926479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.926597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.926624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.926734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.926760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 
00:35:37.207 [2024-10-14 13:46:28.926842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.926870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.926991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.927031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.927152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.927192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.927335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.927363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.927459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.927486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 
00:35:37.207 [2024-10-14 13:46:28.927627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.927654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.927799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.927832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.927938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.927977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.928102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.928141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 00:35:37.207 [2024-10-14 13:46:28.928233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.207 [2024-10-14 13:46:28.928261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.207 qpair failed and we were unable to recover it. 
00:35:37.209 [2024-10-14 13:46:28.936699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:37.210 [2024-10-14 13:46:28.944052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.944081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.944180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.944209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.944296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.944323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.944406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.944433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.944550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.944577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 
00:35:37.210 [2024-10-14 13:46:28.944717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.944744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.944854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.944880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.944974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.945003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.945158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.945198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.945322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.945350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 
00:35:37.210 [2024-10-14 13:46:28.945444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.945471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.945579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.945606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.945743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.945770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.945909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.945936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.946025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.946054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 
00:35:37.210 [2024-10-14 13:46:28.946163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.946204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.946332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.946361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.946506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.946534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.946616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.946643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.210 qpair failed and we were unable to recover it. 00:35:37.210 [2024-10-14 13:46:28.946752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.210 [2024-10-14 13:46:28.946779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 
00:35:37.211 [2024-10-14 13:46:28.946871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.946905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.946999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.947026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.947118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.947158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.947244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.947272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.947386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.947427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 
00:35:37.211 [2024-10-14 13:46:28.947537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.947566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.947711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.947739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.947871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.947899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.948023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.948050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.948159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.948188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 
00:35:37.211 [2024-10-14 13:46:28.948279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.948306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.948433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.948462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.948548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.948576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.948727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.948756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.948890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.948929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 
00:35:37.211 [2024-10-14 13:46:28.949025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.949053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.949169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.949197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.949289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.949316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.949399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.949437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.949549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.949576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 
00:35:37.211 [2024-10-14 13:46:28.949713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.949740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.949880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.949907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.950025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.950054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.950215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.950245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.950334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.950362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 
00:35:37.211 [2024-10-14 13:46:28.950489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.950517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.950635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.950662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.950756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.950784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.950899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.950926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.951042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.951070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 
00:35:37.211 [2024-10-14 13:46:28.951199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.211 [2024-10-14 13:46:28.951226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.211 qpair failed and we were unable to recover it. 00:35:37.211 [2024-10-14 13:46:28.951307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.951334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.951415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.951442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.951529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.951556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.951673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.951699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.951846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.951874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.952002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.952042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.952146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.952175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.952295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.952324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.952473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.952500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.952638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.952672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.952790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.952817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.952910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.952936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.953077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.953105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.953218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.953245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.953356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.953385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.953476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.953513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.953656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.953684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.953802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.953830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.953990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.954155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.954275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.954395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.954507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.954626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.954773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.954907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.954934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.955045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.955071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.955173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.955201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.955282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.955308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.955429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.955456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.955577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.955604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.955695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.955721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.955835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.955860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.955973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.956000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.956080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.956105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.956242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.956270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.956367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.956399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.956557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.956584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.956694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.956722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 
00:35:37.212 [2024-10-14 13:46:28.956816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.212 [2024-10-14 13:46:28.956843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.212 qpair failed and we were unable to recover it. 00:35:37.212 [2024-10-14 13:46:28.956972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.957013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.957149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.957190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.957285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.957315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.957433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.957461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 
00:35:37.213 [2024-10-14 13:46:28.957574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.957602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.957763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.957803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.957925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.957953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.958093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.958139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.958229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.958256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 
00:35:37.213 [2024-10-14 13:46:28.958367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.958395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.958529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.958555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.958645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.958672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.958773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.958800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.958914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.958941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 
00:35:37.213 [2024-10-14 13:46:28.959085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.959125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.959249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.959277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.959406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.959455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.959575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.959604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.959721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.959749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 
00:35:37.213 [2024-10-14 13:46:28.959889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.959917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.960056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.960084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.960216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.960244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.960354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.960381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.960502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.960535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 
00:35:37.213 [2024-10-14 13:46:28.960626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.960655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.960768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.960796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.960883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.960912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.961000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.961026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.961163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.961204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 
00:35:37.213 [2024-10-14 13:46:28.961296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.961325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.961479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.961518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.961641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.961670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.961765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.961795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.961940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.961968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 
00:35:37.213 [2024-10-14 13:46:28.962056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.962084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.962186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.962214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.213 qpair failed and we were unable to recover it. 00:35:37.213 [2024-10-14 13:46:28.962300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.213 [2024-10-14 13:46:28.962327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.962454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.962481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.962599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.962626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.962768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.962796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.962906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.962933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.963025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.963055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.963177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.963205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.963326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.963353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.963443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.963470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.963586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.963613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.963722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.963749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.963834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.963861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.963993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.964034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.964185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.964216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.964339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.964368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.964479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.964507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.964626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.964653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.964736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.964763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.964877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.964905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.964995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.965023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.965138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.965168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.965262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.965291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.965412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.965447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.965538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.965566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.965710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.965738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.965861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.965888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.965980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.966009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.966122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.966163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.966256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.966284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.966397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.966432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.966553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.966580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.966693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.966721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.966844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.966871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.966985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.967012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.967093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.967137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.967255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.967283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.967373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.967400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.967525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.967553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 
00:35:37.214 [2024-10-14 13:46:28.967641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.967669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.967781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.214 [2024-10-14 13:46:28.967809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.214 qpair failed and we were unable to recover it. 00:35:37.214 [2024-10-14 13:46:28.967921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.967948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.968075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.968101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.968196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.968223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 
00:35:37.215 [2024-10-14 13:46:28.968338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.968366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.968463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.968490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.968599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.968626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.968779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.968818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.968910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.968938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 
00:35:37.215 [2024-10-14 13:46:28.969084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.969111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.969220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.969248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.969358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.969384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.969486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.969514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.969630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.969659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 
00:35:37.215 [2024-10-14 13:46:28.969744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.969772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.969884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.969917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.970058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.970086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.970211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.970239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.970353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.970380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 
00:35:37.215 [2024-10-14 13:46:28.970495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.970523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.970609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.970636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.970749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.970776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.970890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.970918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.971028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 
00:35:37.215 [2024-10-14 13:46:28.971175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.971282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.971396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.971527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.971671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 
00:35:37.215 [2024-10-14 13:46:28.971799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.971915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.971942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.972017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.972044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.972158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.215 [2024-10-14 13:46:28.972185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.215 qpair failed and we were unable to recover it. 00:35:37.215 [2024-10-14 13:46:28.972275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.972302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.972386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.972413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.972529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.972555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.972672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.972699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.972815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.972841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.972963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.973134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.973252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.973365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.973528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.973671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.973784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.973936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.973963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.974077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.974104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.974229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.974256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.974345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.974372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.974460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.974486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.974576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.974604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.974702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.974730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.974846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.974873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.974985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.975012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.975098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.975143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.975260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.975287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.975404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.975440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.975564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.975591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.975730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.975756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.975845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.975873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.975984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.976025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.976153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.976184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.976271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.976300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.976386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.976423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.976542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.976568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.976679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.976705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.976824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.976851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.976996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.977024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.977109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.977156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 
00:35:37.216 [2024-10-14 13:46:28.977283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.977312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.977455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.977483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.977606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.977633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.216 [2024-10-14 13:46:28.977719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.216 [2024-10-14 13:46:28.977746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.216 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.977837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.977865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.977950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.977977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.978094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.978137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.978252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.978278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.978420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.978447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.978559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.978586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.978689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.978730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.978855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.978883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.979010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.979051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.979176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.979211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.979304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.979333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.979424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.979452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.979571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.979598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.979684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.979710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.979860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.979887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.979974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.980122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.980248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.980362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.980512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.980651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.980792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.980900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.980929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.981021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.981149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.981293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.981458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.981573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.981692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.981834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.981944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.981972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.982085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.982113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.982272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.982300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.982436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.982475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.982569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.982597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.982680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.982708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 
00:35:37.217 [2024-10-14 13:46:28.982852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.982885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.983009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.983036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.983145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.983186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.983285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.217 [2024-10-14 13:46:28.983313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.217 qpair failed and we were unable to recover it. 00:35:37.217 [2024-10-14 13:46:28.983442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.983468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 
00:35:37.218 [2024-10-14 13:46:28.983556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.983582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.983662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.983687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.983796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.983823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.983912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.983941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.984030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 
00:35:37.218 [2024-10-14 13:46:28.984168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.984281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.984402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.984536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.984654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 
00:35:37.218 [2024-10-14 13:46:28.984768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.984879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.984906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.984984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.985096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.985223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 
00:35:37.218 [2024-10-14 13:46:28.985332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.985449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.985555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.985531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.218 [2024-10-14 13:46:28.985564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.218 [2024-10-14 13:46:28.985579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.218 [2024-10-14 13:46:28.985591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.218 [2024-10-14 13:46:28.985602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:37.218 [2024-10-14 13:46:28.985692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.985798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.985915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.985944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.986048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.986159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 
00:35:37.218 [2024-10-14 13:46:28.986273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.986379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.986524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.986646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.986815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 
00:35:37.218 [2024-10-14 13:46:28.986926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.986951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.987063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.987089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.987208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.987241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.987187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:37.218 [2024-10-14 13:46:28.987217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:37.218 [2024-10-14 13:46:28.987247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:37.218 [2024-10-14 13:46:28.987251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:37.218 [2024-10-14 13:46:28.987378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.987406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 
00:35:37.218 [2024-10-14 13:46:28.987486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.987524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.987646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.987673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.987766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.218 [2024-10-14 13:46:28.987793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.218 qpair failed and we were unable to recover it. 00:35:37.218 [2024-10-14 13:46:28.987875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.987902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.987998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.988116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.988245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.988353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.988499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.988619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.988742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.988868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.988896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.988987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.989104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.989240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.989351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.989487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.989590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.989699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.989811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.989924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.989951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.990483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.990972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.990998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.991110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.991146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.991243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.991269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.991347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.991373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.991483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.991509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.991627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.991653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.991750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.991790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.991880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.991909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.992009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.992049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.992142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.992171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.992255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.992282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 
00:35:37.219 [2024-10-14 13:46:28.992363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.992391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.992508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.992536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.992616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.992644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.219 qpair failed and we were unable to recover it. 00:35:37.219 [2024-10-14 13:46:28.992730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.219 [2024-10-14 13:46:28.992758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.992852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.992881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 
00:35:37.220 [2024-10-14 13:46:28.992978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.993104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.993227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.993347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.993455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 
00:35:37.220 [2024-10-14 13:46:28.993595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.993711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.993828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.993858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.993974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.994140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 
00:35:37.220 [2024-10-14 13:46:28.994249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.994364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.994484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.994587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.994697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 
00:35:37.220 [2024-10-14 13:46:28.994839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.994947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.994975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.995095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.995137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.995248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.995275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 00:35:37.220 [2024-10-14 13:46:28.995353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.220 [2024-10-14 13:46:28.995380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.220 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.995495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.995521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.995601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.995628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.995709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.995744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.995843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.995883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.996001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.996123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.996245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.996354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.996504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.996641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.996751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.996906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.996933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.997059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.997216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.997336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.997444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.997558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.997681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.997829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.997966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.997993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.998081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.998212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.998335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.998469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.998581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.998692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.998802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.998907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.998935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.999020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.999143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.999260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.999384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.999553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.999702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:28.999824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 
00:35:37.221 [2024-10-14 13:46:28.999935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:28.999962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.221 [2024-10-14 13:46:29.000038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.221 [2024-10-14 13:46:29.000065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.221 qpair failed and we were unable to recover it. 00:35:37.222 [2024-10-14 13:46:29.000180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.222 [2024-10-14 13:46:29.000208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.222 qpair failed and we were unable to recover it. 00:35:37.222 [2024-10-14 13:46:29.000294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.222 [2024-10-14 13:46:29.000321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.222 qpair failed and we were unable to recover it. 00:35:37.222 [2024-10-14 13:46:29.000410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.222 [2024-10-14 13:46:29.000437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.222 qpair failed and we were unable to recover it. 
00:35:37.222 [2024-10-14 13:46:29.000549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.222 [2024-10-14 13:46:29.000576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.222 qpair failed and we were unable to recover it. 00:35:37.222 [2024-10-14 13:46:29.000681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.222 [2024-10-14 13:46:29.000708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.222 qpair failed and we were unable to recover it. 00:35:37.222 [2024-10-14 13:46:29.000786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.222 [2024-10-14 13:46:29.000813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.222 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.000940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.000969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.001056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 
00:35:37.508 [2024-10-14 13:46:29.001182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.001306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.001416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.001563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.001691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 
00:35:37.508 [2024-10-14 13:46:29.001809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.001964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.001992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.002082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.002110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.002204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.002231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.002341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.002368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 
00:35:37.508 [2024-10-14 13:46:29.002493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.002522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.002615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.002642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.002741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.002768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.002882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.002909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.003000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.003028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 
00:35:37.508 [2024-10-14 13:46:29.003150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.003178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.003255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.003282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.003368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.003396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.508 qpair failed and we were unable to recover it. 00:35:37.508 [2024-10-14 13:46:29.003510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.508 [2024-10-14 13:46:29.003538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.003624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.003654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 
00:35:37.509 [2024-10-14 13:46:29.003758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.003789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.003875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.003903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.003988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.004093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.004210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 
00:35:37.509 [2024-10-14 13:46:29.004350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.004496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.004648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.004781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 00:35:37.509 [2024-10-14 13:46:29.004906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.509 [2024-10-14 13:46:29.004934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.509 qpair failed and we were unable to recover it. 
00:35:37.509 [2024-10-14 13:46:29.005027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.509 [2024-10-14 13:46:29.005055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.509 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 -> sock connection error -> qpair failed and we were unable to recover it) repeats continuously from 13:46:29.005164 through 13:46:29.019222, cycling over tqpair values 0x7f9994000b90, 0x7f9990000b90, 0x7f999c000b90, and 0x5c9340, all targeting addr=10.0.0.2, port=4420 ...]
00:35:37.512 [2024-10-14 13:46:29.019347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.019375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.019465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.019492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.019605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.019632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.019711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.019739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.019822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.019849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 
00:35:37.512 [2024-10-14 13:46:29.019965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.019993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.020081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.020223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.020353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.020466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 
00:35:37.512 [2024-10-14 13:46:29.020573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.020690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.020802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.020915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.020955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.021038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 
00:35:37.512 [2024-10-14 13:46:29.021181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.021296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.021443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.021585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.021698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 
00:35:37.512 [2024-10-14 13:46:29.021809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.021928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.021955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.022060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.022101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.022233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.022261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.022344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.022373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 
00:35:37.512 [2024-10-14 13:46:29.022473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.022501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.022586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.022613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.022763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.512 [2024-10-14 13:46:29.022790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.512 qpair failed and we were unable to recover it. 00:35:37.512 [2024-10-14 13:46:29.022874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.022902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.023013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.023133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.023246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.023360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.023496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.023637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.023753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.023861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.023890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.023978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.024093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.024222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.024364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.024483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.024587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.024694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.024812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.024929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.024959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.025050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.025184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.025291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.025435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.025542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.025648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.025819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.025946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.025991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.026086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.026205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.026345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.026459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.026577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.026683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.026783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.026958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.026987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.027069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.027098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.027191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.027220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.027308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.027335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 
00:35:37.513 [2024-10-14 13:46:29.027426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.027455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.027568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.027596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.513 qpair failed and we were unable to recover it. 00:35:37.513 [2024-10-14 13:46:29.027717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.513 [2024-10-14 13:46:29.027744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 00:35:37.514 [2024-10-14 13:46:29.027835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.514 [2024-10-14 13:46:29.027863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 00:35:37.514 [2024-10-14 13:46:29.027952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.514 [2024-10-14 13:46:29.027992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 
00:35:37.514 [2024-10-14 13:46:29.028110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.514 [2024-10-14 13:46:29.028144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 00:35:37.514 [2024-10-14 13:46:29.028228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.514 [2024-10-14 13:46:29.028254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 00:35:37.514 [2024-10-14 13:46:29.028332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.514 [2024-10-14 13:46:29.028360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 00:35:37.514 [2024-10-14 13:46:29.028452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.514 [2024-10-14 13:46:29.028479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 00:35:37.514 [2024-10-14 13:46:29.028591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.514 [2024-10-14 13:46:29.028617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.514 qpair failed and we were unable to recover it. 
00:35:37.514 [2024-10-14 13:46:29.028694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.028721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.028809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.028836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.028920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.028949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.029901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.029928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.030950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.030982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.031955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.031994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.032086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.032113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.032239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.032266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.032350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.032377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.032467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.032494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.032608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.032635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.514 [2024-10-14 13:46:29.032732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.514 [2024-10-14 13:46:29.032759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.514 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.032852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.032892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.032988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.033934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.033962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.034063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.034103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.034197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.034229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.034343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.034372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.034459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.034486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.034602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.034629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.034717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.034746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.034861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.034889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.035903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.035930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.036959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.036986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.037097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.037126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.037223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.037251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.037357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.037385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.037498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.037525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.037608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.515 [2024-10-14 13:46:29.037635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.515 qpair failed and we were unable to recover it.
00:35:37.515 [2024-10-14 13:46:29.037719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.037747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.037860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.037888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.037997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.038923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.038950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.039947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.039974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.040944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.040972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.041969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.041997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.042078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.042105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.042196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.042225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.042348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.042375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.042498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.042526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.042636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.042664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.516 [2024-10-14 13:46:29.042758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.516 [2024-10-14 13:46:29.042786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.516 qpair failed and we were unable to recover it.
00:35:37.517 [2024-10-14 13:46:29.042870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.517 [2024-10-14 13:46:29.042897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.517 qpair failed and we were unable to recover it.
00:35:37.517 [2024-10-14 13:46:29.042974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.517 [2024-10-14 13:46:29.043001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.517 qpair failed and we were unable to recover it.
00:35:37.517 [2024-10-14 13:46:29.043120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.043236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.043355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.043463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.043583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 
00:35:37.517 [2024-10-14 13:46:29.043711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.043821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.043967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.043995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.044082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.044110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.044218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.044246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 
00:35:37.517 [2024-10-14 13:46:29.044358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.044391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.044476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.044503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.044581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.044608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.044686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.044713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.044826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.044857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 
00:35:37.517 [2024-10-14 13:46:29.044960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.045086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.045212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.045324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.045437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 
00:35:37.517 [2024-10-14 13:46:29.045548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.045696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.045812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.045924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.045951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.046065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.046093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 
00:35:37.517 [2024-10-14 13:46:29.046190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.046222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.046334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.046361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.046472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.046499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.046611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.046639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.046726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.046755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 
00:35:37.517 [2024-10-14 13:46:29.046836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.046863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.046973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.517 [2024-10-14 13:46:29.047000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.517 qpair failed and we were unable to recover it. 00:35:37.517 [2024-10-14 13:46:29.047090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.047118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.047250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.047277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.047388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.047415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.047508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.047535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.047630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.047671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.047798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.047828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.047918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.047945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.048033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.048157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.048268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.048408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.048514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.048665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.048803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.048934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.048974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.049097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.049220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.049324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.049433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.049555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.049703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.049818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.049927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.049954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.050064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.050187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.050297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.050437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.050572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.050685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.050789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.050930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.050958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.051053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.051192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.051305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.051414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.051551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.051653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.051801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 
00:35:37.518 [2024-10-14 13:46:29.051918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.051944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.052040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.518 [2024-10-14 13:46:29.052080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.518 qpair failed and we were unable to recover it. 00:35:37.518 [2024-10-14 13:46:29.052191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.519 [2024-10-14 13:46:29.052221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.519 qpair failed and we were unable to recover it. 00:35:37.519 [2024-10-14 13:46:29.052334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.519 [2024-10-14 13:46:29.052362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.519 qpair failed and we were unable to recover it. 00:35:37.519 [2024-10-14 13:46:29.052445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.519 [2024-10-14 13:46:29.052473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.519 qpair failed and we were unable to recover it. 
00:35:37.519 [2024-10-14 13:46:29.052559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.519 [2024-10-14 13:46:29.052587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.519 qpair failed and we were unable to recover it.
00:35:37.519 [2024-10-14 13:46:29.052717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.519 [2024-10-14 13:46:29.052747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.519 qpair failed and we were unable to recover it.
00:35:37.519 [2024-10-14 13:46:29.052843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.519 [2024-10-14 13:46:29.052876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.519 qpair failed and we were unable to recover it.
00:35:37.519 [2024-10-14 13:46:29.052969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.519 [2024-10-14 13:46:29.052998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.519 qpair failed and we were unable to recover it.
00:35:37.520 [... the same connect() failed / sock connection error / qpair failed sequence repeats through 13:46:29.066463 for tqpairs 0x7f9994000b90, 0x7f9990000b90, 0x7f999c000b90, and 0x5c9340, all against addr=10.0.0.2, port=4420 ...]
00:35:37.522 [2024-10-14 13:46:29.066547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.066575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.066664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.066693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.066782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.066810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.066899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.066926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.067014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.067116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.067229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.067344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.067457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.067575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.067681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.067801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.067933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.067962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.068055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.068178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.068288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.068396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.068513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.068625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.068743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.068865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.068892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.068976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.069086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.069221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.069329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.069443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.069555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.069669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.069784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.069893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.069919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.070008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.070122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.070241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.070347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.070462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.070586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.070701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.070809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.070924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.070950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 00:35:37.522 [2024-10-14 13:46:29.071036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.522 [2024-10-14 13:46:29.071064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.522 qpair failed and we were unable to recover it. 
00:35:37.522 [2024-10-14 13:46:29.071146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.071175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.071254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.071279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.071367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.071394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.071478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.071505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.071588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.071615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.071697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.071725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.071826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.071867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.071972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.072100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.072242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.072352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.072462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.072591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.072710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.072826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.072945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.072972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.073519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.073906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.073996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.074121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.074236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.074346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.074454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.074574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.074687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.074802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.074934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.074974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.075075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.075192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.075307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.075422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.075532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.075642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 00:35:37.523 [2024-10-14 13:46:29.075764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.523 qpair failed and we were unable to recover it. 
00:35:37.523 [2024-10-14 13:46:29.075886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.523 [2024-10-14 13:46:29.075915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.524 qpair failed and we were unable to recover it. 00:35:37.524 [2024-10-14 13:46:29.076005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.524 [2024-10-14 13:46:29.076033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.524 qpair failed and we were unable to recover it. 00:35:37.524 [2024-10-14 13:46:29.076116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.524 [2024-10-14 13:46:29.076158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.524 qpair failed and we were unable to recover it. 00:35:37.524 [2024-10-14 13:46:29.076251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.524 [2024-10-14 13:46:29.076278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.524 qpair failed and we were unable to recover it. 00:35:37.524 [2024-10-14 13:46:29.076356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.524 [2024-10-14 13:46:29.076383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.524 qpair failed and we were unable to recover it. 
00:35:37.524 [2024-10-14 13:46:29.076497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.076524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.076608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.076635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.076721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.076748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.076868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.076896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.076993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.077936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.077966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.078908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.078936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.079964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.079991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.080073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.080099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.080199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.080228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.080314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.080341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.080427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.080456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.080537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.080564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.524 [2024-10-14 13:46:29.080643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.524 [2024-10-14 13:46:29.080670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.524 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.080762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.080788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.080876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.080904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.080988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.081921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.081947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.082900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.082930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.083948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.083977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.084065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.084094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.084189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.084216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.084300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.084328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.084412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.084440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.084523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.084550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.084633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.084661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.525 [2024-10-14 13:46:29.084750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.525 [2024-10-14 13:46:29.084777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.525 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.084870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.084899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.084983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.085897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.085938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.086961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.086988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.087069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.087095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.087196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.087236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.087324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.087353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.087442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.526 [2024-10-14 13:46:29.087471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.526 qpair failed and we were unable to recover it.
00:35:37.526 [2024-10-14 13:46:29.087582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.087610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.087696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.087725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.087811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.087841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.087934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.087965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.088057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 
00:35:37.526 [2024-10-14 13:46:29.088173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.088281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.088387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.088532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.088641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 
00:35:37.526 [2024-10-14 13:46:29.088756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.088866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.088894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.088976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.089003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.089089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.089137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.089236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.089263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 
00:35:37.526 [2024-10-14 13:46:29.089349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.089376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.089468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.526 [2024-10-14 13:46:29.089497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.526 qpair failed and we were unable to recover it. 00:35:37.526 [2024-10-14 13:46:29.089581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.089608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.089687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.089714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.089807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.089835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.089925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.089952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.090528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.090889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.090974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.091091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.091248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.091360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.091482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.091598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.091713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.091837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.091948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.091976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.092062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.092191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.092308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.092432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.092564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.092676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.092800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.092917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.092944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.093522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.093889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.093981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.094008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 
00:35:37.527 [2024-10-14 13:46:29.094086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.094112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.527 [2024-10-14 13:46:29.094206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.527 [2024-10-14 13:46:29.094238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.527 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.094331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.094371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.094458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.094492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.094587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.094615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.094698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.094725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.094813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.094840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.094932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.094959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.095068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.095200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.095312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.095430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.095536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.095651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.095763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.095876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.095904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.096523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.096909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.096999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.097117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.097243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.097357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.097469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.097588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.097706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.097831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.097939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.097967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.098057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.098084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.098176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.098206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.098294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.098322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.098401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.098432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.098515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.098542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.098629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.098656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 00:35:37.528 [2024-10-14 13:46:29.098768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.528 [2024-10-14 13:46:29.098796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.528 qpair failed and we were unable to recover it. 
00:35:37.528 [2024-10-14 13:46:29.098881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.528 [2024-10-14 13:46:29.098907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.528 qpair failed and we were unable to recover it.
00:35:37.528 [2024-10-14 13:46:29.098982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.528 [2024-10-14 13:46:29.099009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.528 qpair failed and we were unable to recover it.
00:35:37.528 [2024-10-14 13:46:29.099103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.099242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.099354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.099475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.099590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.099703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.099810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.099922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.099950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.100892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.100918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.101922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.101950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.102955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.102982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.103059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.103086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.103175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.103205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.103286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.103315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.103396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.103433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.103546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.103573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.103658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.103687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.529 [2024-10-14 13:46:29.103783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.529 [2024-10-14 13:46:29.103810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.529 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.103894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.103923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.104947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.104974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:37.530 [2024-10-14 13:46:29.105055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:35:37.530 [2024-10-14 13:46:29.105161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.105276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:35:37.530 [2024-10-14 13:46:29.105389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:37.530 [2024-10-14 13:46:29.105527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.530 [2024-10-14 13:46:29.105631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.105756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.105899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.105928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.106906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.106933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.530 [2024-10-14 13:46:29.107846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.530 [2024-10-14 13:46:29.107875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.530 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.107963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.107990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.108883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.108911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.109041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.109174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.109290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.109399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.109542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.109652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.531 [2024-10-14 13:46:29.109779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.531 qpair failed and we were unable to recover it.
00:35:37.531 [2024-10-14 13:46:29.109886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.109932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 
00:35:37.531 [2024-10-14 13:46:29.110491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.110935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.110968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 
00:35:37.531 [2024-10-14 13:46:29.111047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.111169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.111281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.111398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.111514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 
00:35:37.531 [2024-10-14 13:46:29.111627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.111741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.111867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.111907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.112002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.112032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.531 [2024-10-14 13:46:29.112141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.112182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 
00:35:37.531 [2024-10-14 13:46:29.112274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.531 [2024-10-14 13:46:29.112301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.531 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.112384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.112411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.112505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.112531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.112614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.112642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.112727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.112753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 
00:35:37.532 [2024-10-14 13:46:29.112832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.112861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.112946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.112975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.113075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.113220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.113337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 
00:35:37.532 [2024-10-14 13:46:29.113467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.113612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.113722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.113834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.113949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.113980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 
00:35:37.532 [2024-10-14 13:46:29.114068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.114196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.114309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.114415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.114526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 
00:35:37.532 [2024-10-14 13:46:29.114663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.114780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.114890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.114916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.115003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.115121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 
00:35:37.532 [2024-10-14 13:46:29.115248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.115356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.115480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.115586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.115696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 
00:35:37.532 [2024-10-14 13:46:29.115806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.115915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.115942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.116036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.116064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.116165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.116192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.116294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.116333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 
00:35:37.532 [2024-10-14 13:46:29.116423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.116451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.116543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.116570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.116652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.116685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.116764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.532 [2024-10-14 13:46:29.116802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.532 qpair failed and we were unable to recover it. 00:35:37.532 [2024-10-14 13:46:29.116896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.116923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.117002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.117122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.117262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.117374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.117493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.117603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.117716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.117838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.117955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.117983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.118074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.118211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.118320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.118437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.118559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.118683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.118826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.118935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.118962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.119045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.119170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.119286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.119396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.119521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.119634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.119752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.119874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.119901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.120002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.120146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.120262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.120367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.120507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.120621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.120729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.120836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.120949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.120975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 00:35:37.533 [2024-10-14 13:46:29.121071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.533 [2024-10-14 13:46:29.121098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.533 qpair failed and we were unable to recover it. 
00:35:37.533 [2024-10-14 13:46:29.121202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.533 [2024-10-14 13:46:29.121231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.533 qpair failed and we were unable to recover it.
00:35:37.533 [2024-10-14 13:46:29.121317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.533 [2024-10-14 13:46:29.121343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.533 qpair failed and we were unable to recover it.
00:35:37.533 [2024-10-14 13:46:29.121443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.533 [2024-10-14 13:46:29.121470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.533 qpair failed and we were unable to recover it.
00:35:37.533 [2024-10-14 13:46:29.121549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.533 [2024-10-14 13:46:29.121575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.533 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.121670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.121696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.121785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.121810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.121899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.121928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.122900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.122928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.123014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.123137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.123247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.123358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.123469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:37.534 [2024-10-14 13:46:29.123587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:37.534 [2024-10-14 13:46:29.123727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.534 [2024-10-14 13:46:29.123863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.123904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.534 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.123991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.124887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.124982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.125009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.125097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.125150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.125237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.125263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.534 qpair failed and we were unable to recover it.
00:35:37.534 [2024-10-14 13:46:29.125352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.534 [2024-10-14 13:46:29.125378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.125470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.125496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.125590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.125616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.125706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.125752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.125850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.125881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.125969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.125998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.126910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.126950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.127906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.127988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.128881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.128985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.535 qpair failed and we were unable to recover it.
00:35:37.535 [2024-10-14 13:46:29.129965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.535 [2024-10-14 13:46:29.129992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.130927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.130956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.131941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.131967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.132051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.132078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.132183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.132210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.132311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.132338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.132423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.536 [2024-10-14 13:46:29.132452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.536 qpair failed and we were unable to recover it.
00:35:37.536 [2024-10-14 13:46:29.132530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.132557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.132644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.132671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.132756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.132784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.132881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.132922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.133015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 
00:35:37.536 [2024-10-14 13:46:29.133142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.133267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.133383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.133511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.133628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 
00:35:37.536 [2024-10-14 13:46:29.133748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.133858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.133891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.133978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.134006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.134094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.134123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.134233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.134261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 
00:35:37.536 [2024-10-14 13:46:29.134348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.134375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.134466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.134492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.134599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.134625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.536 qpair failed and we were unable to recover it. 00:35:37.536 [2024-10-14 13:46:29.134705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.536 [2024-10-14 13:46:29.134730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.134815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.134843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.134958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.134987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.135076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.135209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.135320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.135430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.135574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.135696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.135806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.135915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.135941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.136022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.136144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.136252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.136367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.136507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.136618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.136727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.136838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.136951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.136978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.137066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.137194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.137342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.137457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.137565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.137671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.137777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.137893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.137919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.138524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.138885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.138976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.139003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 
00:35:37.537 [2024-10-14 13:46:29.139090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.139118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.139211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.537 [2024-10-14 13:46:29.139239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.537 qpair failed and we were unable to recover it. 00:35:37.537 [2024-10-14 13:46:29.139318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.139345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.139423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.139460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.139544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.139571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.139690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.139716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.139800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.139828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.139910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.139939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.140017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.140133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.140254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.140377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.140520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.140640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.140745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.140858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.140972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.140999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.141077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.141212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.141325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.141446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.141555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.141663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.141770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.141892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.141923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.142008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.142118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.142234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.142345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.142462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.142571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.142683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.142793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.142905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.142933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.143033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.143073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.143179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.143209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.143300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.143328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.143424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.143457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.143552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.143579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 00:35:37.538 [2024-10-14 13:46:29.143668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.538 [2024-10-14 13:46:29.143695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.538 qpair failed and we were unable to recover it. 
00:35:37.538 [2024-10-14 13:46:29.143780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.538 [2024-10-14 13:46:29.143808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.538 qpair failed and we were unable to recover it.
00:35:37.538 [2024-10-14 13:46:29.143890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.538 [2024-10-14 13:46:29.143916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.538 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.143991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.144929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.144963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.145884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.145978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.146907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.146993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.147910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.147937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.539 [2024-10-14 13:46:29.148029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.539 [2024-10-14 13:46:29.148075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.539 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.148210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.148326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.148445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.148551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.148661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.148770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.148900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.148985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.149918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.149947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.150936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.150964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.151888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.151973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.152000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.152079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.152105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.152201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.152229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.152315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.152342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.152448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.152475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.152555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.152583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.152693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.540 [2024-10-14 13:46:29.152726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.540 qpair failed and we were unable to recover it.
00:35:37.540 [2024-10-14 13:46:29.152818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.152846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.152925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.152953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.153953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.153982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.154064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.154091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.154177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.154205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.154291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.154319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.154410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.154440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.154524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.154551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.154641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.541 [2024-10-14 13:46:29.154667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.541 qpair failed and we were unable to recover it.
00:35:37.541 [2024-10-14 13:46:29.154751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.154777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.154849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.154875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.154949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.154977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.155065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.155183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 
00:35:37.541 [2024-10-14 13:46:29.155290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.155409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.155527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.155657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.155776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 
00:35:37.541 [2024-10-14 13:46:29.155888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.155915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.155995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.156109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.156241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.156349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 
00:35:37.541 [2024-10-14 13:46:29.156471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.156579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.156687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.156795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.156917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.156943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 
00:35:37.541 [2024-10-14 13:46:29.157022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.157048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.157163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.157202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.541 [2024-10-14 13:46:29.157295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.541 [2024-10-14 13:46:29.157323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.541 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.157411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.157442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.157519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.157545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.157623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.157650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.157734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.157763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.157851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.157879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.157972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.158079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.158208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.158318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.158428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.158572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.158682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.158800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.158922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.158969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.159066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.159191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.159321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.159444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.159552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.159692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.159801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.159911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.159938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.160012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.160133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.160239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.160355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.160502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.160618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.160734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.160858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.160898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.160992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.161113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.161247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.161356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.161474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.161580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.161686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 
00:35:37.542 [2024-10-14 13:46:29.161814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.161941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.161969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.162057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.542 [2024-10-14 13:46:29.162086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.542 qpair failed and we were unable to recover it. 00:35:37.542 [2024-10-14 13:46:29.162191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.162220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.162302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.162328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 
00:35:37.543 [2024-10-14 13:46:29.162412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.162450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.162535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.162560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.162651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.162678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.162763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.162790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.162878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.162904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 
00:35:37.543 [2024-10-14 13:46:29.162983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.163091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.163236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.163373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.163497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 
00:35:37.543 [2024-10-14 13:46:29.163614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.163752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.163879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.163907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.164025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.164149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 
00:35:37.543 [2024-10-14 13:46:29.164258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.164370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.164491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.164609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.164717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 
00:35:37.543 [2024-10-14 13:46:29.164828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.164958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.164998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.165084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.165211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.165332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 
00:35:37.543 [2024-10-14 13:46:29.165461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.165565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.165676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.165786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 00:35:37.543 [2024-10-14 13:46:29.165891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.543 [2024-10-14 13:46:29.165918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.543 qpair failed and we were unable to recover it. 
00:35:37.543 [2024-10-14 13:46:29.165994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.166909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.166954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.167050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.167078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.167179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.167211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.167323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.167351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.167472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.167499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.167614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.167641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.167724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.167752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.167842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.167872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.168967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.168994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.169085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.169111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.169222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 Malloc0
00:35:37.544 [2024-10-14 13:46:29.169262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.169358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.169386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.169488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.169515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.169601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.544 [2024-10-14 13:46:29.169629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.169715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.169742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:37.544 [2024-10-14 13:46:29.169830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.169858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.544 [2024-10-14 13:46:29.169940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.169967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.544 [2024-10-14 13:46:29.170083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.170114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.170205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.170232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.170317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.170344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.170425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.170453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.544 qpair failed and we were unable to recover it.
00:35:37.544 [2024-10-14 13:46:29.170535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.544 [2024-10-14 13:46:29.170562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.170675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.170701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.170790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.170817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.170893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.170920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.171910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.171937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.172902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.172960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:37.545 [2024-10-14 13:46:29.172992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.173897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.173982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.174927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.174959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.175066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.175093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.175180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.175207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.175287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.175315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.545 qpair failed and we were unable to recover it.
00:35:37.545 [2024-10-14 13:46:29.175401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.545 [2024-10-14 13:46:29.175428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.175513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.175540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.175651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.175677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.175760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.175787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.175871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.175898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.175985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.176140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.176281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.176416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.176533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.176650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.176768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.176888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.176916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.177002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.177028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.177133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.177174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.177263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.177292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.177389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.546 [2024-10-14 13:46:29.177418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.546 qpair failed and we were unable to recover it.
00:35:37.546 [2024-10-14 13:46:29.177502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.177528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.177616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.177643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.177723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.177750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.177883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.177910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.177995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 
00:35:37.546 [2024-10-14 13:46:29.178110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.178247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.178362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.178510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.178618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 
00:35:37.546 [2024-10-14 13:46:29.178731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.178841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.178957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.178985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.179069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.179192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 
00:35:37.546 [2024-10-14 13:46:29.179303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.179419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.179526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.179643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.179752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 
00:35:37.546 [2024-10-14 13:46:29.179862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.179888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.179984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.180024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.180109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.546 [2024-10-14 13:46:29.180150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.546 qpair failed and we were unable to recover it. 00:35:37.546 [2024-10-14 13:46:29.180241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.180268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.180348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.180375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 
00:35:37.547 [2024-10-14 13:46:29.180465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.180491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.180587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.180627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.180711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.180738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.180826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.180853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.180947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.180982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 
00:35:37.547 [2024-10-14 13:46:29.181079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.547 [2024-10-14 13:46:29.181199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.181316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:37.547 [2024-10-14 13:46:29.181435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 
00:35:37.547 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.547 [2024-10-14 13:46:29.181541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:37.547 [2024-10-14 13:46:29.181653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.181771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.181894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.181934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 
00:35:37.547 [2024-10-14 13:46:29.182024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.182144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.182260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.182369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.182478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 
00:35:37.547 [2024-10-14 13:46:29.182596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.182722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.182855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.182888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.182975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.183084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 
00:35:37.547 [2024-10-14 13:46:29.183212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.183324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.183437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.183542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.183645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 
00:35:37.547 [2024-10-14 13:46:29.183755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.183858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.183884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.547 qpair failed and we were unable to recover it. 00:35:37.547 [2024-10-14 13:46:29.183997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.547 [2024-10-14 13:46:29.184026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.184109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.184232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 
00:35:37.548 [2024-10-14 13:46:29.184353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.184473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.184585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.184697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.184813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 
00:35:37.548 [2024-10-14 13:46:29.184932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.184959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.185050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.185164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.185275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.185389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 
00:35:37.548 [2024-10-14 13:46:29.185503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.185644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.185763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.185872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.185900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.186016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 
00:35:37.548 [2024-10-14 13:46:29.186141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.186258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.186401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.186537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.186644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 
00:35:37.548 [2024-10-14 13:46:29.186762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.186890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.186918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.186999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.187024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.187149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.187176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 00:35:37.548 [2024-10-14 13:46:29.187258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.548 [2024-10-14 13:46:29.187284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.548 qpair failed and we were unable to recover it. 
00:35:37.548 [2024-10-14 13:46:29.187373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.548 [2024-10-14 13:46:29.187400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420
00:35:37.548 qpair failed and we were unable to recover it.
00:35:37.548 [2024-10-14 13:46:29.187496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.548 [2024-10-14 13:46:29.187524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9990000b90 with addr=10.0.0.2, port=4420
00:35:37.548 qpair failed and we were unable to recover it.
00:35:37.548 [2024-10-14 13:46:29.188063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.548 [2024-10-14 13:46:29.188092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420
00:35:37.548 qpair failed and we were unable to recover it.
00:35:37.549 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.549 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:37.549 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.549 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.549 [2024-10-14 13:46:29.191171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.549 [2024-10-14 13:46:29.191211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420
00:35:37.549 qpair failed and we were unable to recover it.
00:35:37.550 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.550 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:37.550 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.550 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.551 [2024-10-14 13:46:29.200460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.551 [2024-10-14 13:46:29.200488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9994000b90 with addr=10.0.0.2, port=4420 00:35:37.551 qpair failed and we were unable to recover it. 00:35:37.551 [2024-10-14 13:46:29.200583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.551 [2024-10-14 13:46:29.200612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.551 qpair failed and we were unable to recover it. 00:35:37.551 [2024-10-14 13:46:29.200695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.551 [2024-10-14 13:46:29.200721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f999c000b90 with addr=10.0.0.2, port=4420 00:35:37.551 qpair failed and we were unable to recover it. 00:35:37.551 [2024-10-14 13:46:29.200808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.551 [2024-10-14 13:46:29.200838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.551 qpair failed and we were unable to recover it. 00:35:37.551 [2024-10-14 13:46:29.200922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.551 [2024-10-14 13:46:29.200948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.551 qpair failed and we were unable to recover it. 
00:35:37.551 [2024-10-14 13:46:29.201030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.551 [2024-10-14 13:46:29.201057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c9340 with addr=10.0.0.2, port=4420 00:35:37.551 qpair failed and we were unable to recover it. 00:35:37.551 [2024-10-14 13:46:29.201406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.551 [2024-10-14 13:46:29.203818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.551 [2024-10-14 13:46:29.203937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.551 [2024-10-14 13:46:29.203971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.551 [2024-10-14 13:46:29.203988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.551 [2024-10-14 13:46:29.204001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.551 [2024-10-14 13:46:29.204035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.551 qpair failed and we were unable to recover it. 
00:35:37.551 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.551 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:37.551 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.551 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:37.551 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.551 13:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 400337
00:35:37.551 [2024-10-14 13:46:29.213648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.551 [2024-10-14 13:46:29.213754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.551 [2024-10-14 13:46:29.213782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.551 [2024-10-14 13:46:29.213798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.551 [2024-10-14 13:46:29.213810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.551 [2024-10-14 13:46:29.213840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.551 qpair failed and we were unable to recover it.
00:35:37.551 [2024-10-14 13:46:29.223608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.551 [2024-10-14 13:46:29.223702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.551 [2024-10-14 13:46:29.223728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.551 [2024-10-14 13:46:29.223743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.551 [2024-10-14 13:46:29.223756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.551 [2024-10-14 13:46:29.223785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.551 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.233754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.233846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.233871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.233886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.233898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.233929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.243603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.243698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.243724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.243740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.243752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.243780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.253754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.253847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.253872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.253887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.253899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.253928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.263674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.263766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.263792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.263807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.263819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.263847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.273695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.273799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.273825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.273840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.273852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.273881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.283709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.283838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.283870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.283887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.283900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.283928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.293791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.293880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.293905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.293919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.293932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.293961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.303796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.303884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.303911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.303928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.303941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.303971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.313803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.313909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.313934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.313948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.313961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.313989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.552 [2024-10-14 13:46:29.323831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.552 [2024-10-14 13:46:29.323928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.552 [2024-10-14 13:46:29.323954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.552 [2024-10-14 13:46:29.323969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.552 [2024-10-14 13:46:29.323982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.552 [2024-10-14 13:46:29.324010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.552 qpair failed and we were unable to recover it.
00:35:37.553 [2024-10-14 13:46:29.333837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.553 [2024-10-14 13:46:29.333920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.553 [2024-10-14 13:46:29.333948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.553 [2024-10-14 13:46:29.333963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.553 [2024-10-14 13:46:29.333975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.553 [2024-10-14 13:46:29.334005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.553 qpair failed and we were unable to recover it.
00:35:37.811 [2024-10-14 13:46:29.343899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.811 [2024-10-14 13:46:29.343997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.811 [2024-10-14 13:46:29.344026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.811 [2024-10-14 13:46:29.344041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.344053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.344082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.353929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.354034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.354061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.354076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.354088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.354117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.363951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.364045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.364070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.364085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.364098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.364126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.373951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.374038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.374072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.374087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.374100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.374137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.384003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.384086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.384111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.384126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.384148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.384178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.394017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.394143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.394170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.394185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.394198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.394227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.404027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.404114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.404148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.404164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.404177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.404205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.414183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.414279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.414307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.414322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.414335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.414367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.424112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.424223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.424251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.424266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.424279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.424307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.434159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.434257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.434283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.434298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.434310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.434341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.444158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.444252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.444277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.444291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.444304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.444333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.454200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.454290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.454315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.454329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.454342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.454372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.464230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.464378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.464410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.464433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.464446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.464475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.474295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:37.812 [2024-10-14 13:46:29.474397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:37.812 [2024-10-14 13:46:29.474423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:37.812 [2024-10-14 13:46:29.474445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:37.812 [2024-10-14 13:46:29.474458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:37.812 [2024-10-14 13:46:29.474486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:37.812 qpair failed and we were unable to recover it.
00:35:37.812 [2024-10-14 13:46:29.484282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.484396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.484426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.484441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.484454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.484482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.494332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.494452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.494478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.494493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.494506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.494535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.504334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.504427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.504452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.504466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.504479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.504512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.514408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.514515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.514541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.514555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.514568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.514596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.524375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.524473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.524499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.524514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.524526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.524554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.534433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.534522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.534550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.534566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.534579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.534608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.544429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.544516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.544541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.544556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.544569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.544597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.554505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.554625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.554657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.554673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.554686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.554715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.564496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.564617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.564642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.564657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.564670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.564698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.574551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.574633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.574659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.574673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.574686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.574714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.584595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.584682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.584707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.584721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.584734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.584762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.594594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.594686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.594711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.594725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.594738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.594773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.604657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.604757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.604783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.604798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.812 [2024-10-14 13:46:29.604811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.812 [2024-10-14 13:46:29.604838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.812 qpair failed and we were unable to recover it. 
00:35:37.812 [2024-10-14 13:46:29.614665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.812 [2024-10-14 13:46:29.614791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.812 [2024-10-14 13:46:29.614818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.812 [2024-10-14 13:46:29.614833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.813 [2024-10-14 13:46:29.614845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.813 [2024-10-14 13:46:29.614874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.813 qpair failed and we were unable to recover it. 
00:35:37.813 [2024-10-14 13:46:29.624691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.813 [2024-10-14 13:46:29.624783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.813 [2024-10-14 13:46:29.624809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.813 [2024-10-14 13:46:29.624823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.813 [2024-10-14 13:46:29.624836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.813 [2024-10-14 13:46:29.624864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.813 qpair failed and we were unable to recover it. 
00:35:37.813 [2024-10-14 13:46:29.634736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.813 [2024-10-14 13:46:29.634849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.813 [2024-10-14 13:46:29.634876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.813 [2024-10-14 13:46:29.634891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.813 [2024-10-14 13:46:29.634904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.813 [2024-10-14 13:46:29.634932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.813 qpair failed and we were unable to recover it. 
00:35:37.813 [2024-10-14 13:46:29.644725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.813 [2024-10-14 13:46:29.644823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.813 [2024-10-14 13:46:29.644856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.813 [2024-10-14 13:46:29.644872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.813 [2024-10-14 13:46:29.644885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.813 [2024-10-14 13:46:29.644913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.813 qpair failed and we were unable to recover it. 
00:35:37.813 [2024-10-14 13:46:29.654742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.813 [2024-10-14 13:46:29.654829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.813 [2024-10-14 13:46:29.654854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.813 [2024-10-14 13:46:29.654868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.813 [2024-10-14 13:46:29.654882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.813 [2024-10-14 13:46:29.654910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.813 qpair failed and we were unable to recover it. 
00:35:37.813 [2024-10-14 13:46:29.664780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:37.813 [2024-10-14 13:46:29.664875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:37.813 [2024-10-14 13:46:29.664902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:37.813 [2024-10-14 13:46:29.664918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:37.813 [2024-10-14 13:46:29.664931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:37.813 [2024-10-14 13:46:29.664961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:37.813 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.674852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.674956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.674984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.675000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.675012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.675043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.684852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.684952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.684979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.684994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.685007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.685040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.694870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.694965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.694991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.695005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.695018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.695046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.704901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.704990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.705015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.705030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.705043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.705071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.714947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.715041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.715066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.715080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.715093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.715122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.724965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.725053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.725078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.725092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.725105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.725142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.735004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.735108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.735148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.735164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.735178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.735206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.745059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.745160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.745187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.745202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.745215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.745244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.755041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.755149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.755176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.755192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.755204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.755233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.765064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.765192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.765219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.765234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.765247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.765275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.775120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.775217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.775242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.775256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.775269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.072 [2024-10-14 13:46:29.775303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.072 qpair failed and we were unable to recover it. 
00:35:38.072 [2024-10-14 13:46:29.785108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.072 [2024-10-14 13:46:29.785206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.072 [2024-10-14 13:46:29.785231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.072 [2024-10-14 13:46:29.785246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.072 [2024-10-14 13:46:29.785258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.785287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.795158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.795251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.795276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.795290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.795303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.795331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.805173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.805289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.805315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.805331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.805343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.805372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.815211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.815303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.815330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.815347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.815360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.815389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.825239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.825323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.825356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.825372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.825385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.825414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.835347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.835481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.835507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.835522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.835534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.835562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.845292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.845380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.845405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.845422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.845434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.845462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.855355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.855446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.855471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.855485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.855498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.855527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.865342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.865426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.865451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.865466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.865479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.865512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.875474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.875581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.875606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.875631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.875643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.875673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.885411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.885506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.885531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.885546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.885560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.885590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.895507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.895599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.895623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.895638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.895650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.895678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.905504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.905627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.905654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.905669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.905681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.905709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.915502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.915602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.073 [2024-10-14 13:46:29.915634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.073 [2024-10-14 13:46:29.915650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.073 [2024-10-14 13:46:29.915662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.073 [2024-10-14 13:46:29.915690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.073 qpair failed and we were unable to recover it. 
00:35:38.073 [2024-10-14 13:46:29.925530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.073 [2024-10-14 13:46:29.925635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.074 [2024-10-14 13:46:29.925663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.074 [2024-10-14 13:46:29.925677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.074 [2024-10-14 13:46:29.925690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.074 [2024-10-14 13:46:29.925720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.074 qpair failed and we were unable to recover it. 
00:35:38.333 [2024-10-14 13:46:29.935551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.333 [2024-10-14 13:46:29.935681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.333 [2024-10-14 13:46:29.935709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.333 [2024-10-14 13:46:29.935725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.333 [2024-10-14 13:46:29.935738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.333 [2024-10-14 13:46:29.935767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.333 qpair failed and we were unable to recover it. 
00:35:38.333 [2024-10-14 13:46:29.945605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.333 [2024-10-14 13:46:29.945696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.333 [2024-10-14 13:46:29.945721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.333 [2024-10-14 13:46:29.945736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.333 [2024-10-14 13:46:29.945749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.333 [2024-10-14 13:46:29.945777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.333 qpair failed and we were unable to recover it. 
00:35:38.333 [2024-10-14 13:46:29.955651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.333 [2024-10-14 13:46:29.955759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.333 [2024-10-14 13:46:29.955786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.333 [2024-10-14 13:46:29.955800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.333 [2024-10-14 13:46:29.955819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.333 [2024-10-14 13:46:29.955848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.333 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:29.965664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:29.965765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:29.965793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:29.965808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:29.965821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:29.965850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:29.975725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:29.975840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:29.975867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:29.975882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:29.975895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:29.975924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:29.985686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:29.985794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:29.985821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:29.985837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:29.985849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:29.985877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:29.996011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:29.996157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:29.996184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:29.996198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:29.996211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:29.996240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.005837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.005940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.005967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.005982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.005995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.006024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.016011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.016142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.016172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.016187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.016200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.016229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.025873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.026001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.026028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.026044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.026058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.026086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.035859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.035949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.035974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.035989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.036002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.036039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.045900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.046023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.046052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.046068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.046089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.046124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.055946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.056043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.056070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.056085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.056098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.056141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.065946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.066030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.066065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.066080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.066093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.066121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.075968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.076070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.076095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.076120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.076142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.076171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.085978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.086083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.086120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.086145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.086159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.086188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.334 qpair failed and we were unable to recover it. 
00:35:38.334 [2024-10-14 13:46:30.095990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.334 [2024-10-14 13:46:30.096084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.334 [2024-10-14 13:46:30.096108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.334 [2024-10-14 13:46:30.096125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.334 [2024-10-14 13:46:30.096147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.334 [2024-10-14 13:46:30.096176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.106022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.106112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.106144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.106159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.106177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.106206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.116073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.116190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.116215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.116229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.116242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.116270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.126115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.126221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.126246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.126261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.126273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.126301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.136173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.136319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.136343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.136358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.136380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.136410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.146146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.146238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.146262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.146276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.146289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.146317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.156213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.156322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.156348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.156363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.156376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.156404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.166201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.166338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.166365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.166381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.166393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.166422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.176227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.176308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.176333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.176347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.176359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.176387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.335 [2024-10-14 13:46:30.186286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.335 [2024-10-14 13:46:30.186375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.335 [2024-10-14 13:46:30.186405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.335 [2024-10-14 13:46:30.186422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.335 [2024-10-14 13:46:30.186435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.335 [2024-10-14 13:46:30.186466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.335 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.196294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.196387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.196415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.196430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.196444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.196473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.206318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.206414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.206440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.206455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.206468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.206497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.216358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.216450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.216476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.216490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.216503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.216532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.226372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.226452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.226477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.226492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.226511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.226540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.236425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.236518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.236544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.236559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.236571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.236600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.246451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.246542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.246568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.246583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.246596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.246624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.256476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.256605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.256630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.256645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.256657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.256686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.266495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.266578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.266603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.266618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.594 [2024-10-14 13:46:30.266630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.594 [2024-10-14 13:46:30.266658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.594 qpair failed and we were unable to recover it. 
00:35:38.594 [2024-10-14 13:46:30.276516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.594 [2024-10-14 13:46:30.276608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.594 [2024-10-14 13:46:30.276634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.594 [2024-10-14 13:46:30.276648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.276661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.276689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.286564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.286654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.286679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.286693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.286706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.286735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.296558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.296677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.296702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.296716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.296729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.296758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.306648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.306728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.306754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.306769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.306782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.306810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.316625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.316748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.316773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.316788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.316806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.316836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.326642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.326730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.326755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.326769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.326783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.326811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.336700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.336785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.336810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.336824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.336837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.336865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.346728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.346811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.346836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.346851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.346863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.346891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.356776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.356891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.356916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.356931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.356944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.356972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.366778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.366912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.366938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.366953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.366966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.366995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.376796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.376880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.376905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.376920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.376932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.376961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.386865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.386947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.386972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.386987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.387000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.387027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.396869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.396959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.396984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.396999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.397012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.397041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.406890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.407023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.407049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.407069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.595 [2024-10-14 13:46:30.407083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.595 [2024-10-14 13:46:30.407111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.595 qpair failed and we were unable to recover it. 
00:35:38.595 [2024-10-14 13:46:30.416940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.595 [2024-10-14 13:46:30.417018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.595 [2024-10-14 13:46:30.417044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.595 [2024-10-14 13:46:30.417059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.596 [2024-10-14 13:46:30.417072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.596 [2024-10-14 13:46:30.417100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.596 qpair failed and we were unable to recover it. 
00:35:38.596 [2024-10-14 13:46:30.426968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.596 [2024-10-14 13:46:30.427089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.596 [2024-10-14 13:46:30.427114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.596 [2024-10-14 13:46:30.427139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.596 [2024-10-14 13:46:30.427155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.596 [2024-10-14 13:46:30.427184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.596 qpair failed and we were unable to recover it. 
00:35:38.596 [2024-10-14 13:46:30.436995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.596 [2024-10-14 13:46:30.437084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.596 [2024-10-14 13:46:30.437108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.596 [2024-10-14 13:46:30.437123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.596 [2024-10-14 13:46:30.437145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.596 [2024-10-14 13:46:30.437174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.596 qpair failed and we were unable to recover it. 
00:35:38.596 [2024-10-14 13:46:30.446992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.596 [2024-10-14 13:46:30.447082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.596 [2024-10-14 13:46:30.447110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.596 [2024-10-14 13:46:30.447125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.596 [2024-10-14 13:46:30.447153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.596 [2024-10-14 13:46:30.447184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.596 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.457013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.457100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.457136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.457154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.457167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.457197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.467087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.467181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.467207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.467223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.467236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.467264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.477077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.477180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.477205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.477220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.477233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.477261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.487180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.487323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.487351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.487366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.487379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.487408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.497154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.497246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.497272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.497292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.497306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.497335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.507233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.507324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.507349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.507364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.507377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.507405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.517234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.517334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.517359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.517373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.517385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.517414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.855 [2024-10-14 13:46:30.527212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.855 [2024-10-14 13:46:30.527335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.855 [2024-10-14 13:46:30.527360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.855 [2024-10-14 13:46:30.527375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.855 [2024-10-14 13:46:30.527388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.855 [2024-10-14 13:46:30.527415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.855 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.537232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.537328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.537352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.537367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.537380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.537408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.547281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.547374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.547400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.547415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.547428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.547456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.557338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.557442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.557466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.557480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.557493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.557522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.567333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.567427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.567452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.567468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.567480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.567509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.577386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.577525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.577549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.577564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.577577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.577606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.587487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.587583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.587611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.587634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.587648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.587679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.597449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.597570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.597595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.597609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.597622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.597651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.607470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.607561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.607587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.607602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.607615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.607643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.617463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.617545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.617570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.617585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.617597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.617626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.627488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.627616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.627644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.627662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.627675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.627704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.637578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.637690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.637716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.637731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.637744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.637772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.647588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.647717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.647742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.647757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.647770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.647799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.657609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.657702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.657727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.657741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.657755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.657783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.667613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.856 [2024-10-14 13:46:30.667702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.856 [2024-10-14 13:46:30.667728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.856 [2024-10-14 13:46:30.667742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.856 [2024-10-14 13:46:30.667756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.856 [2024-10-14 13:46:30.667785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.856 qpair failed and we were unable to recover it. 
00:35:38.856 [2024-10-14 13:46:30.677690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.857 [2024-10-14 13:46:30.677780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.857 [2024-10-14 13:46:30.677805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.857 [2024-10-14 13:46:30.677826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.857 [2024-10-14 13:46:30.677839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.857 [2024-10-14 13:46:30.677868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.857 qpair failed and we were unable to recover it. 
00:35:38.857 [2024-10-14 13:46:30.687699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.857 [2024-10-14 13:46:30.687783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.857 [2024-10-14 13:46:30.687809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.857 [2024-10-14 13:46:30.687824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.857 [2024-10-14 13:46:30.687837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.857 [2024-10-14 13:46:30.687868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.857 qpair failed and we were unable to recover it. 
00:35:38.857 [2024-10-14 13:46:30.697701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.857 [2024-10-14 13:46:30.697787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.857 [2024-10-14 13:46:30.697813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.857 [2024-10-14 13:46:30.697828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.857 [2024-10-14 13:46:30.697841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.857 [2024-10-14 13:46:30.697869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.857 qpair failed and we were unable to recover it. 
00:35:38.857 [2024-10-14 13:46:30.707761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:38.857 [2024-10-14 13:46:30.707885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:38.857 [2024-10-14 13:46:30.707912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:38.857 [2024-10-14 13:46:30.707928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:38.857 [2024-10-14 13:46:30.707940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:38.857 [2024-10-14 13:46:30.707969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:38.857 qpair failed and we were unable to recover it. 
00:35:39.117 [2024-10-14 13:46:30.717811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.117 [2024-10-14 13:46:30.717907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.117 [2024-10-14 13:46:30.717935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.117 [2024-10-14 13:46:30.717950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.117 [2024-10-14 13:46:30.717963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.117 [2024-10-14 13:46:30.717992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.117 qpair failed and we were unable to recover it. 
00:35:39.117 [2024-10-14 13:46:30.727815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.117 [2024-10-14 13:46:30.727909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.117 [2024-10-14 13:46:30.727935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.117 [2024-10-14 13:46:30.727950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.117 [2024-10-14 13:46:30.727963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.117 [2024-10-14 13:46:30.727991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.117 qpair failed and we were unable to recover it. 
00:35:39.117 [the preceding CONNECT failure sequence repeats 34 more times, roughly one attempt every 10 ms, from 2024-10-14 13:46:30.737804 through 13:46:31.068903 (console timestamps 00:35:39.117-00:35:39.379); every repeat reports the same status (sct 1, sc 130) for tqpair=0x5c9340 / qpair id 3 and ends with "qpair failed and we were unable to recover it."]
00:35:39.379 [2024-10-14 13:46:31.078824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.379 [2024-10-14 13:46:31.078931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.379 [2024-10-14 13:46:31.078955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.379 [2024-10-14 13:46:31.078969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.379 [2024-10-14 13:46:31.078982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.379 [2024-10-14 13:46:31.079010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.379 qpair failed and we were unable to recover it. 
00:35:39.379 [2024-10-14 13:46:31.088833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.379 [2024-10-14 13:46:31.088912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.379 [2024-10-14 13:46:31.088938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.379 [2024-10-14 13:46:31.088952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.379 [2024-10-14 13:46:31.088965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.379 [2024-10-14 13:46:31.088993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.379 qpair failed and we were unable to recover it. 
00:35:39.379 [2024-10-14 13:46:31.098840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.379 [2024-10-14 13:46:31.098922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.379 [2024-10-14 13:46:31.098947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.379 [2024-10-14 13:46:31.098961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.379 [2024-10-14 13:46:31.098974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.379 [2024-10-14 13:46:31.099003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.379 qpair failed and we were unable to recover it. 
00:35:39.379 [2024-10-14 13:46:31.108857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.379 [2024-10-14 13:46:31.108954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.379 [2024-10-14 13:46:31.108979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.379 [2024-10-14 13:46:31.108994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.379 [2024-10-14 13:46:31.109007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.379 [2024-10-14 13:46:31.109035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.379 qpair failed and we were unable to recover it. 
00:35:39.379 [2024-10-14 13:46:31.118936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.379 [2024-10-14 13:46:31.119024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.379 [2024-10-14 13:46:31.119048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.379 [2024-10-14 13:46:31.119063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.379 [2024-10-14 13:46:31.119076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.379 [2024-10-14 13:46:31.119104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.379 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.128944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.129027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.129057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.129072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.129086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.129114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.139023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.139106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.139139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.139156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.139180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.139208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.148996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.149073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.149099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.149113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.149126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.149163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.159066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.159166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.159193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.159208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.159220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.159249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.169054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.169189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.169214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.169230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.169243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.169277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.179092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.179248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.179275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.179290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.179304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.179333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.189123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.189219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.189244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.189258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.189271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.189299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.199198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.199298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.199323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.199337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.199350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.199378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.209202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.209304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.209330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.209345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.209358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.209387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.219188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.219277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.219307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.219322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.219334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.219363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.380 [2024-10-14 13:46:31.229274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.380 [2024-10-14 13:46:31.229364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.380 [2024-10-14 13:46:31.229391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.380 [2024-10-14 13:46:31.229406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.380 [2024-10-14 13:46:31.229427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.380 [2024-10-14 13:46:31.229458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.380 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.239288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.239380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.239416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.239432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.239445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.239476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.249275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.249364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.249390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.249406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.249423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.249452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.259330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.259468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.259495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.259509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.259522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.259556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.269327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.269458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.269485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.269500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.269512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.269540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.279383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.279508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.279532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.279547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.279559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.279587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.289424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.289546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.289573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.289588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.289600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.289629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.299438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.299526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.299551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.299565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.299578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.299607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.309521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.309617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.309648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.309664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.309676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.309705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.319501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.319608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.319643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.319657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.319670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.319698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.329509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.329597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.329622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.329637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.329650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.329678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.339576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.339665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.339690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.339705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.339718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.339746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.349553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.349682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.349708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.349722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.349735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.349768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.359619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.640 [2024-10-14 13:46:31.359716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.640 [2024-10-14 13:46:31.359742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.640 [2024-10-14 13:46:31.359757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.640 [2024-10-14 13:46:31.359769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.640 [2024-10-14 13:46:31.359798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.640 qpair failed and we were unable to recover it. 
00:35:39.640 [2024-10-14 13:46:31.369639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.640 [2024-10-14 13:46:31.369733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.640 [2024-10-14 13:46:31.369759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.369773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.369785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.369813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.379667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.379784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.379810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.379825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.379837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.379865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.389659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.389759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.389784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.389799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.389812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.389840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.399766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.399879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.399910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.399926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.399938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.399966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.409768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.409876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.409903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.409918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.409931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.409959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.419792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.419899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.419925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.419940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.419952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.419981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.429798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.429890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.429915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.429929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.429942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.429970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.439829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.439929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.439954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.439969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.439982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.440015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.449863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.449981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.450008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.450024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.450037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.450064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.459879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.459993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.460020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.460034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.460047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.460075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.469884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.469968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.469993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.470007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.470020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.470048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.479937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.480034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.480059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.480073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.480086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.480115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.641 [2024-10-14 13:46:31.490017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.641 [2024-10-14 13:46:31.490135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.641 [2024-10-14 13:46:31.490170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.641 [2024-10-14 13:46:31.490186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.641 [2024-10-14 13:46:31.490199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.641 [2024-10-14 13:46:31.490229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.641 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.499989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.901 [2024-10-14 13:46:31.500084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.901 [2024-10-14 13:46:31.500123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.901 [2024-10-14 13:46:31.500149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.901 [2024-10-14 13:46:31.500162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.901 [2024-10-14 13:46:31.500192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.901 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.510012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.901 [2024-10-14 13:46:31.510157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.901 [2024-10-14 13:46:31.510185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.901 [2024-10-14 13:46:31.510200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.901 [2024-10-14 13:46:31.510214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.901 [2024-10-14 13:46:31.510243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.901 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.520082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.901 [2024-10-14 13:46:31.520194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.901 [2024-10-14 13:46:31.520219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.901 [2024-10-14 13:46:31.520234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.901 [2024-10-14 13:46:31.520246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.901 [2024-10-14 13:46:31.520275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.901 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.530097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.901 [2024-10-14 13:46:31.530200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.901 [2024-10-14 13:46:31.530226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.901 [2024-10-14 13:46:31.530240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.901 [2024-10-14 13:46:31.530253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.901 [2024-10-14 13:46:31.530287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.901 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.540097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.901 [2024-10-14 13:46:31.540190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.901 [2024-10-14 13:46:31.540215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.901 [2024-10-14 13:46:31.540230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.901 [2024-10-14 13:46:31.540243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.901 [2024-10-14 13:46:31.540271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.901 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.550109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.901 [2024-10-14 13:46:31.550208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.901 [2024-10-14 13:46:31.550233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.901 [2024-10-14 13:46:31.550248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.901 [2024-10-14 13:46:31.550261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.901 [2024-10-14 13:46:31.550289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.901 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.560171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.901 [2024-10-14 13:46:31.560261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.901 [2024-10-14 13:46:31.560286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.901 [2024-10-14 13:46:31.560300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.901 [2024-10-14 13:46:31.560312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.901 [2024-10-14 13:46:31.560341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.901 qpair failed and we were unable to recover it.
00:35:39.901 [2024-10-14 13:46:31.570197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.570285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.570310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.570325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.570337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.570366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.580215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.580307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.580336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.580351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.580364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.580392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.590264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.590380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.590417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.590432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.590445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.590473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.600289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.600383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.600408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.600423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.600435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.600464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.610302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.610396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.610420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.610435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.610449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.610477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.620327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.620414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.620438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.620453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.620473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.620503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.630369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.630457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.630481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.630496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.630508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.630536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.640428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.640524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.640548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.640563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.640575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.640604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.650468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.650584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.650610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.650625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.650637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.650666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.660452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.660541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.660566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.660581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.660593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.660621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.670490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.670590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.670620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.670637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.670649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.670679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.680522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.680621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.680647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.680662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.680675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.680704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.690524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.690619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.690645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.690660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.690672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.690700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.700540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.700628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.700652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.902 [2024-10-14 13:46:31.700667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.902 [2024-10-14 13:46:31.700679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.902 [2024-10-14 13:46:31.700708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.902 qpair failed and we were unable to recover it.
00:35:39.902 [2024-10-14 13:46:31.710564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.902 [2024-10-14 13:46:31.710651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.902 [2024-10-14 13:46:31.710675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.903 [2024-10-14 13:46:31.710689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.903 [2024-10-14 13:46:31.710707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.903 [2024-10-14 13:46:31.710737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.903 qpair failed and we were unable to recover it.
00:35:39.903 [2024-10-14 13:46:31.720644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:39.903 [2024-10-14 13:46:31.720765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:39.903 [2024-10-14 13:46:31.720792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:39.903 [2024-10-14 13:46:31.720806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:39.903 [2024-10-14 13:46:31.720819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:39.903 [2024-10-14 13:46:31.720847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:39.903 qpair failed and we were unable to recover it.
00:35:39.903 [2024-10-14 13:46:31.730663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.903 [2024-10-14 13:46:31.730765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.903 [2024-10-14 13:46:31.730792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.903 [2024-10-14 13:46:31.730807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.903 [2024-10-14 13:46:31.730819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.903 [2024-10-14 13:46:31.730848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.903 qpair failed and we were unable to recover it. 
00:35:39.903 [2024-10-14 13:46:31.740664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.903 [2024-10-14 13:46:31.740768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.903 [2024-10-14 13:46:31.740794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.903 [2024-10-14 13:46:31.740809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.903 [2024-10-14 13:46:31.740822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.903 [2024-10-14 13:46:31.740850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.903 qpair failed and we were unable to recover it. 
00:35:39.903 [2024-10-14 13:46:31.750690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:39.903 [2024-10-14 13:46:31.750790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:39.903 [2024-10-14 13:46:31.750826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:39.903 [2024-10-14 13:46:31.750852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:39.903 [2024-10-14 13:46:31.750876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:39.903 [2024-10-14 13:46:31.750921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:39.903 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.760736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.760832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.760859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.760874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.760886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.760916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.770776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.770868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.770894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.770908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.770921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.770949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.780808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.780898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.780923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.780938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.780951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.780981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.790794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.790879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.790904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.790918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.790930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.790958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.800840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.800945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.800972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.800986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.801005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.801035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.810869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.810963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.810988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.811003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.811015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.811044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.820873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.820962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.820990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.821004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.821017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.821046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.830933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.831017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.831042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.831056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.831069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.831097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.840952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.841045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.841069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.841083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.841096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.841124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.851003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.851108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.851154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.851169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.851181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.851210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.861012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.861102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.861137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.861155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.162 [2024-10-14 13:46:31.861168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.162 [2024-10-14 13:46:31.861197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.162 qpair failed and we were unable to recover it. 
00:35:40.162 [2024-10-14 13:46:31.871020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.162 [2024-10-14 13:46:31.871108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.162 [2024-10-14 13:46:31.871140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.162 [2024-10-14 13:46:31.871156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.871169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.871197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.881072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.881175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.881202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.881216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.881229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.881257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.891093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.891195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.891223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.891238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.891257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.891287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.901119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.901220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.901245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.901260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.901273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.901301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.911170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.911256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.911281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.911295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.911308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.911336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.921205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.921340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.921366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.921380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.921393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.921421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.931240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.931336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.931361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.931376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.931389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.931417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.941278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.941372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.941397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.941412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.941425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.941453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.951258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.951385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.951412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.951427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.951440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.951468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.961332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.961444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.961470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.961485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.961497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.961525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.971309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.971394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.971419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.971433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.971446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.971474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.981336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.981425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.981450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.981464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.981482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.981511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:31.991391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:31.991516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:31.991542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:31.991557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:31.991570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:31.991598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:32.001410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:32.001502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:32.001526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:32.001541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:32.001553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.163 [2024-10-14 13:46:32.001581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.163 qpair failed and we were unable to recover it. 
00:35:40.163 [2024-10-14 13:46:32.011592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.163 [2024-10-14 13:46:32.011698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.163 [2024-10-14 13:46:32.011723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.163 [2024-10-14 13:46:32.011739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.163 [2024-10-14 13:46:32.011751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.164 [2024-10-14 13:46:32.011779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.164 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.021492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.021578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.021605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.021621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.021634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.021664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.031578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.031679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.031706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.031721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.031734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.031762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.041608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.041712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.041738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.041753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.041766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.041794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.051551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.051660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.051687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.051702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.051715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.051744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.061562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.061650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.061674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.061689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.061701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.061729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.071595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.071681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.071705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.071725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.071738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.071767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.081698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.081792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.081816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.081830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.081843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.081871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.091658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.091751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.091776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.091791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.091804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.091832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.101685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.101778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.101802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.101817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.101830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.101859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.111701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.111782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.111807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.111821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.111834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.111862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.121785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.121879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.121904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.121919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.424 [2024-10-14 13:46:32.121932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.424 [2024-10-14 13:46:32.121963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.424 qpair failed and we were unable to recover it. 
00:35:40.424 [2024-10-14 13:46:32.131782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.424 [2024-10-14 13:46:32.131877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.424 [2024-10-14 13:46:32.131902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.424 [2024-10-14 13:46:32.131918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.131930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.131959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.141794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.141880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.141904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.141919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.141931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.141959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.151808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.151893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.151919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.151933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.151952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.151980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.161853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.161948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.161972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.161992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.162005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.162033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.171893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.171997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.172031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.172047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.172060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.172089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.181912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.182003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.182027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.182041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.182054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.182082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.191935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.192049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.192075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.192090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.192103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.192139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.201967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.202111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.202147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.202163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.202176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.202205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.211987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.212076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.212101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.212115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.212138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.212169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.222017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.222101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.222126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.222152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.222166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.222203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.232024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.232110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.232142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.232159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.232175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.232203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.242103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.242204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.242233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.242249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.242262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.242292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.252109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.252210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.252237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.252257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.252271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.252302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.262175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.262262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.262287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.262302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.262315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.262343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.425 [2024-10-14 13:46:32.272178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.425 [2024-10-14 13:46:32.272259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.425 [2024-10-14 13:46:32.272284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.425 [2024-10-14 13:46:32.272299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.425 [2024-10-14 13:46:32.272312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.425 [2024-10-14 13:46:32.272341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.425 qpair failed and we were unable to recover it. 
00:35:40.685 [2024-10-14 13:46:32.282191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.685 [2024-10-14 13:46:32.282279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.685 [2024-10-14 13:46:32.282306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.685 [2024-10-14 13:46:32.282321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.685 [2024-10-14 13:46:32.282335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.685 [2024-10-14 13:46:32.282365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.685 qpair failed and we were unable to recover it. 
00:35:40.685 [2024-10-14 13:46:32.292215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.685 [2024-10-14 13:46:32.292307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.685 [2024-10-14 13:46:32.292334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.685 [2024-10-14 13:46:32.292349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.685 [2024-10-14 13:46:32.292362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.685 [2024-10-14 13:46:32.292393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.685 qpair failed and we were unable to recover it. 
00:35:40.685 [2024-10-14 13:46:32.302277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.685 [2024-10-14 13:46:32.302388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.685 [2024-10-14 13:46:32.302413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.685 [2024-10-14 13:46:32.302427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.685 [2024-10-14 13:46:32.302441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.685 [2024-10-14 13:46:32.302470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.685 qpair failed and we were unable to recover it. 
00:35:40.685 [2024-10-14 13:46:32.312256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.685 [2024-10-14 13:46:32.312337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.685 [2024-10-14 13:46:32.312362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.685 [2024-10-14 13:46:32.312375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.685 [2024-10-14 13:46:32.312388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.685 [2024-10-14 13:46:32.312417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.685 qpair failed and we were unable to recover it. 
00:35:40.685 [2024-10-14 13:46:32.322300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.685 [2024-10-14 13:46:32.322390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.685 [2024-10-14 13:46:32.322415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.685 [2024-10-14 13:46:32.322430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.685 [2024-10-14 13:46:32.322443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.685 [2024-10-14 13:46:32.322471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.685 qpair failed and we were unable to recover it. 
00:35:40.685 [2024-10-14 13:46:32.332305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.685 [2024-10-14 13:46:32.332435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.685 [2024-10-14 13:46:32.332460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.685 [2024-10-14 13:46:32.332475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.685 [2024-10-14 13:46:32.332489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.685 [2024-10-14 13:46:32.332517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.685 qpair failed and we were unable to recover it.
00:35:40.685 [2024-10-14 13:46:32.342332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.685 [2024-10-14 13:46:32.342422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.685 [2024-10-14 13:46:32.342447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.685 [2024-10-14 13:46:32.342467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.685 [2024-10-14 13:46:32.342481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.685 [2024-10-14 13:46:32.342510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.685 qpair failed and we were unable to recover it.
00:35:40.685 [2024-10-14 13:46:32.352410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.685 [2024-10-14 13:46:32.352493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.685 [2024-10-14 13:46:32.352517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.685 [2024-10-14 13:46:32.352532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.685 [2024-10-14 13:46:32.352546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.685 [2024-10-14 13:46:32.352574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.685 qpair failed and we were unable to recover it.
00:35:40.685 [2024-10-14 13:46:32.362461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.685 [2024-10-14 13:46:32.362547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.685 [2024-10-14 13:46:32.362572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.685 [2024-10-14 13:46:32.362587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.362600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.362628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.372451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.372535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.372560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.372575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.372588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.372617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.382454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.382551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.382575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.382590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.382602] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.382631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.392496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.392616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.392641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.392656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.392669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.392697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.402527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.402621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.402646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.402660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.402673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.402701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.412610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.412742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.412767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.412781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.412794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.412823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.422566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.422650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.422675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.422691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.422704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.422732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.432628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.432715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.432739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.432759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.432774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.432802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.442628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.442716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.442741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.442757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.442769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.442797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.452739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.452826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.452851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.452866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.452879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.452908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.462710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.462796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.462822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.462837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.462849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.462878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.686 [2024-10-14 13:46:32.472723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.686 [2024-10-14 13:46:32.472832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.686 [2024-10-14 13:46:32.472857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.686 [2024-10-14 13:46:32.472871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.686 [2024-10-14 13:46:32.472884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.686 [2024-10-14 13:46:32.472913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.686 qpair failed and we were unable to recover it.
00:35:40.687 [2024-10-14 13:46:32.482752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.687 [2024-10-14 13:46:32.482879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.687 [2024-10-14 13:46:32.482905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.687 [2024-10-14 13:46:32.482920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.687 [2024-10-14 13:46:32.482934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.687 [2024-10-14 13:46:32.482962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.687 qpair failed and we were unable to recover it.
00:35:40.687 [2024-10-14 13:46:32.492808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.687 [2024-10-14 13:46:32.492923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.687 [2024-10-14 13:46:32.492948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.687 [2024-10-14 13:46:32.492962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.687 [2024-10-14 13:46:32.492976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.687 [2024-10-14 13:46:32.493004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.687 qpair failed and we were unable to recover it.
00:35:40.687 [2024-10-14 13:46:32.502812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.687 [2024-10-14 13:46:32.502941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.687 [2024-10-14 13:46:32.502968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.687 [2024-10-14 13:46:32.502983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.687 [2024-10-14 13:46:32.502996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.687 [2024-10-14 13:46:32.503027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.687 qpair failed and we were unable to recover it.
00:35:40.687 [2024-10-14 13:46:32.512837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.687 [2024-10-14 13:46:32.512925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.687 [2024-10-14 13:46:32.512951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.687 [2024-10-14 13:46:32.512965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.687 [2024-10-14 13:46:32.512978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.687 [2024-10-14 13:46:32.513007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.687 qpair failed and we were unable to recover it.
00:35:40.687 [2024-10-14 13:46:32.522909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.687 [2024-10-14 13:46:32.523024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.687 [2024-10-14 13:46:32.523048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.687 [2024-10-14 13:46:32.523068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.687 [2024-10-14 13:46:32.523082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.687 [2024-10-14 13:46:32.523111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.687 qpair failed and we were unable to recover it.
00:35:40.687 [2024-10-14 13:46:32.532936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.687 [2024-10-14 13:46:32.533027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.687 [2024-10-14 13:46:32.533052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.687 [2024-10-14 13:46:32.533067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.687 [2024-10-14 13:46:32.533080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.687 [2024-10-14 13:46:32.533109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.687 qpair failed and we were unable to recover it.
00:35:40.948 [2024-10-14 13:46:32.542933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.948 [2024-10-14 13:46:32.543014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.948 [2024-10-14 13:46:32.543042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.948 [2024-10-14 13:46:32.543057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.948 [2024-10-14 13:46:32.543071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.948 [2024-10-14 13:46:32.543100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.948 qpair failed and we were unable to recover it.
00:35:40.948 [2024-10-14 13:46:32.552954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.948 [2024-10-14 13:46:32.553036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.948 [2024-10-14 13:46:32.553062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.948 [2024-10-14 13:46:32.553077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.948 [2024-10-14 13:46:32.553090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.948 [2024-10-14 13:46:32.553119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.948 qpair failed and we were unable to recover it.
00:35:40.948 [2024-10-14 13:46:32.563020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.948 [2024-10-14 13:46:32.563111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.948 [2024-10-14 13:46:32.563149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.948 [2024-10-14 13:46:32.563166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.948 [2024-10-14 13:46:32.563180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.948 [2024-10-14 13:46:32.563209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.948 qpair failed and we were unable to recover it.
00:35:40.948 [2024-10-14 13:46:32.573032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.948 [2024-10-14 13:46:32.573156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.948 [2024-10-14 13:46:32.573183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.948 [2024-10-14 13:46:32.573198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.948 [2024-10-14 13:46:32.573211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.948 [2024-10-14 13:46:32.573241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.948 qpair failed and we were unable to recover it.
00:35:40.948 [2024-10-14 13:46:32.583085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.948 [2024-10-14 13:46:32.583198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.948 [2024-10-14 13:46:32.583224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.948 [2024-10-14 13:46:32.583239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.948 [2024-10-14 13:46:32.583252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.948 [2024-10-14 13:46:32.583280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.948 qpair failed and we were unable to recover it.
00:35:40.948 [2024-10-14 13:46:32.593066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.948 [2024-10-14 13:46:32.593152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.948 [2024-10-14 13:46:32.593178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.948 [2024-10-14 13:46:32.593193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.948 [2024-10-14 13:46:32.593206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.593234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.603179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.603276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.603302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.603316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.603329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.603358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.613169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.613258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.613289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.613304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.613317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.613346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.623181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.623291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.623316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.623331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.623344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.623373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.633187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.633275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.633299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.633314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.633327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.633356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.643245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.643334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.643359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.643374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.643387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.643415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.653249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.653341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.653366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.653381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.653394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.653422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.663309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.663418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.663446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.663462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.663475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.663505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.673300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.673381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.673407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.673422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.673435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.673464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.683349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:40.949 [2024-10-14 13:46:32.683435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:40.949 [2024-10-14 13:46:32.683460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:40.949 [2024-10-14 13:46:32.683475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:40.949 [2024-10-14 13:46:32.683488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:40.949 [2024-10-14 13:46:32.683516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:40.949 qpair failed and we were unable to recover it.
00:35:40.949 [2024-10-14 13:46:32.693380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.949 [2024-10-14 13:46:32.693471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.949 [2024-10-14 13:46:32.693496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.949 [2024-10-14 13:46:32.693511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.949 [2024-10-14 13:46:32.693523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.949 [2024-10-14 13:46:32.693552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.949 qpair failed and we were unable to recover it. 
00:35:40.949 [2024-10-14 13:46:32.703437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.949 [2024-10-14 13:46:32.703523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.949 [2024-10-14 13:46:32.703553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.949 [2024-10-14 13:46:32.703569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.949 [2024-10-14 13:46:32.703582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.949 [2024-10-14 13:46:32.703611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.949 qpair failed and we were unable to recover it. 
00:35:40.949 [2024-10-14 13:46:32.713417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.949 [2024-10-14 13:46:32.713535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.949 [2024-10-14 13:46:32.713560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.949 [2024-10-14 13:46:32.713574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.949 [2024-10-14 13:46:32.713588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.949 [2024-10-14 13:46:32.713616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.949 qpair failed and we were unable to recover it. 
00:35:40.949 [2024-10-14 13:46:32.723514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.949 [2024-10-14 13:46:32.723624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.949 [2024-10-14 13:46:32.723649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.949 [2024-10-14 13:46:32.723663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.949 [2024-10-14 13:46:32.723676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.949 [2024-10-14 13:46:32.723704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.949 qpair failed and we were unable to recover it. 
00:35:40.949 [2024-10-14 13:46:32.733568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.949 [2024-10-14 13:46:32.733664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.949 [2024-10-14 13:46:32.733689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.949 [2024-10-14 13:46:32.733704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.950 [2024-10-14 13:46:32.733717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.950 [2024-10-14 13:46:32.733745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.950 qpair failed and we were unable to recover it. 
00:35:40.950 [2024-10-14 13:46:32.743550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.950 [2024-10-14 13:46:32.743667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.950 [2024-10-14 13:46:32.743693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.950 [2024-10-14 13:46:32.743708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.950 [2024-10-14 13:46:32.743722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.950 [2024-10-14 13:46:32.743766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.950 qpair failed and we were unable to recover it. 
00:35:40.950 [2024-10-14 13:46:32.753559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.950 [2024-10-14 13:46:32.753639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.950 [2024-10-14 13:46:32.753665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.950 [2024-10-14 13:46:32.753679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.950 [2024-10-14 13:46:32.753692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.950 [2024-10-14 13:46:32.753721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.950 qpair failed and we were unable to recover it. 
00:35:40.950 [2024-10-14 13:46:32.763581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.950 [2024-10-14 13:46:32.763701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.950 [2024-10-14 13:46:32.763726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.950 [2024-10-14 13:46:32.763742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.950 [2024-10-14 13:46:32.763755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.950 [2024-10-14 13:46:32.763784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.950 qpair failed and we were unable to recover it. 
00:35:40.950 [2024-10-14 13:46:32.773662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.950 [2024-10-14 13:46:32.773753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.950 [2024-10-14 13:46:32.773778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.950 [2024-10-14 13:46:32.773793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.950 [2024-10-14 13:46:32.773806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.950 [2024-10-14 13:46:32.773835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.950 qpair failed and we were unable to recover it. 
00:35:40.950 [2024-10-14 13:46:32.783613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.950 [2024-10-14 13:46:32.783700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.950 [2024-10-14 13:46:32.783726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.950 [2024-10-14 13:46:32.783741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.950 [2024-10-14 13:46:32.783754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.950 [2024-10-14 13:46:32.783782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.950 qpair failed and we were unable to recover it. 
00:35:40.950 [2024-10-14 13:46:32.793693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:40.950 [2024-10-14 13:46:32.793815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:40.950 [2024-10-14 13:46:32.793845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:40.950 [2024-10-14 13:46:32.793861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:40.950 [2024-10-14 13:46:32.793874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:40.950 [2024-10-14 13:46:32.793903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.950 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.803699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.803798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.803826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.803842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.803855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.803884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.813775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.813867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.813894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.813910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.813924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.813953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.823777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.823863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.823889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.823904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.823917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.823947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.833791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.833915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.833940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.833954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.833967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.834001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.843833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.843963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.843988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.844002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.844015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.844044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.853868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.853977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.854005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.854022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.854035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.854065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.863881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.863970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.863996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.864011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.864024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.864052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.873888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.873975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.874001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.874016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.874028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.874057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.883899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.883990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.249 [2024-10-14 13:46:32.884021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.249 [2024-10-14 13:46:32.884037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.249 [2024-10-14 13:46:32.884049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.249 [2024-10-14 13:46:32.884078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.249 qpair failed and we were unable to recover it. 
00:35:41.249 [2024-10-14 13:46:32.893957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.249 [2024-10-14 13:46:32.894060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.894085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.894100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.894113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.894149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.903951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.904041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.904067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.904082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.904095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.904124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.913975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.914063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.914088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.914104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.914117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.914152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.924036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.924125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.924158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.924173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.924186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.924219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.934084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.934201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.934229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.934246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.934260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.934289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.944112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.944231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.944257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.944271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.944284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.944313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.954096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.954211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.954237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.954252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.954265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.954294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.964172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.964301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.964327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.964341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.964354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.964382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [2024-10-14 13:46:32.974162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.250 [2024-10-14 13:46:32.974285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.250 [2024-10-14 13:46:32.974316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.250 [2024-10-14 13:46:32.974332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.250 [2024-10-14 13:46:32.974345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.250 [2024-10-14 13:46:32.974373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.250 qpair failed and we were unable to recover it. 
00:35:41.250 [the identical seven-record error sequence repeated 35 more times, one connect attempt roughly every 10 ms, from 13:46:32.984 through 13:46:33.325; every attempt failed the same way: Unknown controller ID 0x1, Connect command failed rc -5 with sct 1, sc 130, Failed to connect tqpair=0x5c9340, CQ transport error -6 (No such device or address) on qpair id 3, qpair failed and we were unable to recover it]
00:35:41.533 [2024-10-14 13:46:33.335190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.533 [2024-10-14 13:46:33.335286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.533 [2024-10-14 13:46:33.335310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.533 [2024-10-14 13:46:33.335325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.533 [2024-10-14 13:46:33.335338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.533 [2024-10-14 13:46:33.335366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.533 qpair failed and we were unable to recover it. 
00:35:41.533 [2024-10-14 13:46:33.345252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.533 [2024-10-14 13:46:33.345339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.533 [2024-10-14 13:46:33.345364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.533 [2024-10-14 13:46:33.345379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.533 [2024-10-14 13:46:33.345392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.533 [2024-10-14 13:46:33.345421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.533 qpair failed and we were unable to recover it. 
00:35:41.533 [2024-10-14 13:46:33.355236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.533 [2024-10-14 13:46:33.355321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.533 [2024-10-14 13:46:33.355346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.533 [2024-10-14 13:46:33.355360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.533 [2024-10-14 13:46:33.355373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.533 [2024-10-14 13:46:33.355401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.533 qpair failed and we were unable to recover it. 
00:35:41.533 [2024-10-14 13:46:33.365293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.533 [2024-10-14 13:46:33.365391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.533 [2024-10-14 13:46:33.365415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.533 [2024-10-14 13:46:33.365439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.533 [2024-10-14 13:46:33.365451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.533 [2024-10-14 13:46:33.365480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.533 qpair failed and we were unable to recover it. 
00:35:41.533 [2024-10-14 13:46:33.375377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.533 [2024-10-14 13:46:33.375480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.533 [2024-10-14 13:46:33.375506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.533 [2024-10-14 13:46:33.375522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.533 [2024-10-14 13:46:33.375540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.533 [2024-10-14 13:46:33.375569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.533 qpair failed and we were unable to recover it. 
00:35:41.792 [2024-10-14 13:46:33.385351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.792 [2024-10-14 13:46:33.385441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.792 [2024-10-14 13:46:33.385469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.792 [2024-10-14 13:46:33.385484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.792 [2024-10-14 13:46:33.385497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.792 [2024-10-14 13:46:33.385528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.792 qpair failed and we were unable to recover it. 
00:35:41.792 [2024-10-14 13:46:33.395363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.792 [2024-10-14 13:46:33.395452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.792 [2024-10-14 13:46:33.395479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.395494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.395507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.395536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.405412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.405511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.405537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.405553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.405565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.405594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.415452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.415548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.415573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.415587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.415601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.415630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.425454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.425544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.425569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.425584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.425597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.425625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.435529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.435618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.435643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.435658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.435670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.435698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.445541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.445664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.445695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.445711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.445724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.445753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.455589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.455691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.455717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.455732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.455745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.455774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.465562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.465678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.465705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.465719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.465737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.465766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.475599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.475694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.475719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.475733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.475746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.475774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.485610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.485698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.485722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.485736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.485749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.485777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.495620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.495720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.495747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.495762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.495775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.495803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.505714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.505816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.505847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.505864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.505877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.505907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.515715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.515812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.515837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.515852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.515864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.515893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.525723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.525814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.525838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.793 [2024-10-14 13:46:33.525852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.793 [2024-10-14 13:46:33.525865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.793 [2024-10-14 13:46:33.525892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.793 qpair failed and we were unable to recover it. 
00:35:41.793 [2024-10-14 13:46:33.535787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.793 [2024-10-14 13:46:33.535884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.793 [2024-10-14 13:46:33.535909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.535923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.535935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.535964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.545829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.794 [2024-10-14 13:46:33.545934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.794 [2024-10-14 13:46:33.545961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.545976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.545989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.546019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.555856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.794 [2024-10-14 13:46:33.555945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.794 [2024-10-14 13:46:33.555970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.555985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.556003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.556032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.565871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.794 [2024-10-14 13:46:33.565967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.794 [2024-10-14 13:46:33.565993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.566008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.566020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.566048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.575885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.794 [2024-10-14 13:46:33.575976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.794 [2024-10-14 13:46:33.576004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.576019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.576031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.576059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.585882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.794 [2024-10-14 13:46:33.585972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.794 [2024-10-14 13:46:33.585997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.586011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.586024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.586052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.595905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.794 [2024-10-14 13:46:33.595983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.794 [2024-10-14 13:46:33.596007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.596021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.596034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.596063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.605997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:41.794 [2024-10-14 13:46:33.606089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:41.794 [2024-10-14 13:46:33.606114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:41.794 [2024-10-14 13:46:33.606136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:41.794 [2024-10-14 13:46:33.606151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:41.794 [2024-10-14 13:46:33.606180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.794 qpair failed and we were unable to recover it. 
00:35:41.794 [2024-10-14 13:46:33.615981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:41.794 [2024-10-14 13:46:33.616071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:41.794 [2024-10-14 13:46:33.616096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:41.794 [2024-10-14 13:46:33.616110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:41.794 [2024-10-14 13:46:33.616123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:41.794 [2024-10-14 13:46:33.616161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:41.794 qpair failed and we were unable to recover it.
00:35:41.794 [2024-10-14 13:46:33.626031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:41.794 [2024-10-14 13:46:33.626158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:41.794 [2024-10-14 13:46:33.626185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:41.794 [2024-10-14 13:46:33.626201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:41.794 [2024-10-14 13:46:33.626213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:41.794 [2024-10-14 13:46:33.626241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:41.794 qpair failed and we were unable to recover it.
00:35:41.794 [2024-10-14 13:46:33.636025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:41.794 [2024-10-14 13:46:33.636141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:41.794 [2024-10-14 13:46:33.636168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:41.794 [2024-10-14 13:46:33.636182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:41.794 [2024-10-14 13:46:33.636195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:41.794 [2024-10-14 13:46:33.636223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:41.794 qpair failed and we were unable to recover it.
00:35:41.794 [2024-10-14 13:46:33.646124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:41.794 [2024-10-14 13:46:33.646235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:41.794 [2024-10-14 13:46:33.646262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:41.794 [2024-10-14 13:46:33.646278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:41.794 [2024-10-14 13:46:33.646298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:41.794 [2024-10-14 13:46:33.646343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:41.794 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.656148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.656252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.656280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.656296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.656309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.656339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.666141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.666235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.666262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.666277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.666290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.666319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.676235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.676332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.676358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.676373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.676385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.676414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.686246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.686359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.686386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.686401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.686414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.686442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.696235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.696331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.696358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.696373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.696385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.696414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.706257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.706352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.706378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.706393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.706405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.706433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.716412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.716500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.716529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.716545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.716558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.716588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.054 [2024-10-14 13:46:33.726317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.054 [2024-10-14 13:46:33.726407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.054 [2024-10-14 13:46:33.726432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.054 [2024-10-14 13:46:33.726446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.054 [2024-10-14 13:46:33.726459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.054 [2024-10-14 13:46:33.726487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.054 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.736326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.736424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.736450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.736471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.736484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.736512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.746348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.746434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.746459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.746474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.746487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.746516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.756428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.756517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.756541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.756556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.756569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.756597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.766420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.766512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.766536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.766550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.766563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.766590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.776472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.776564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.776589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.776603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.776616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.776644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.786505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.786596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.786622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.786637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.786649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.786677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.796513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.796605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.796629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.796643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.796656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.796685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.806628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.806714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.806738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.806752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.806765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.806793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.816585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.816677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.816701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.816716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.816728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.816757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.826611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.826744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.826768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.826789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.826803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.826832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.836608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.836700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.836724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.836739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.836752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.836780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.846698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.846796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.846820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.846835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.846848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.846877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.856720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.856840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.856866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.856881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.856894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.856922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.866723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.866808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.055 [2024-10-14 13:46:33.866832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.055 [2024-10-14 13:46:33.866847] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.055 [2024-10-14 13:46:33.866859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.055 [2024-10-14 13:46:33.866889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.055 qpair failed and we were unable to recover it.
00:35:42.055 [2024-10-14 13:46:33.876751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.055 [2024-10-14 13:46:33.876854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.056 [2024-10-14 13:46:33.876879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.056 [2024-10-14 13:46:33.876893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.056 [2024-10-14 13:46:33.876906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.056 [2024-10-14 13:46:33.876934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.056 qpair failed and we were unable to recover it.
00:35:42.056 [2024-10-14 13:46:33.886792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.056 [2024-10-14 13:46:33.886888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.056 [2024-10-14 13:46:33.886914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.056 [2024-10-14 13:46:33.886930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.056 [2024-10-14 13:46:33.886943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.056 [2024-10-14 13:46:33.886972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.056 qpair failed and we were unable to recover it.
00:35:42.056 [2024-10-14 13:46:33.896788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.056 [2024-10-14 13:46:33.896874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.056 [2024-10-14 13:46:33.896899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.056 [2024-10-14 13:46:33.896913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.056 [2024-10-14 13:46:33.896925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.056 [2024-10-14 13:46:33.896954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.056 qpair failed and we were unable to recover it.
00:35:42.056 [2024-10-14 13:46:33.906813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.056 [2024-10-14 13:46:33.906931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.056 [2024-10-14 13:46:33.906960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.056 [2024-10-14 13:46:33.906976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.056 [2024-10-14 13:46:33.906989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.056 [2024-10-14 13:46:33.907018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.056 qpair failed and we were unable to recover it.
00:35:42.315 [2024-10-14 13:46:33.916854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.315 [2024-10-14 13:46:33.916946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.315 [2024-10-14 13:46:33.916975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.315 [2024-10-14 13:46:33.916996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.315 [2024-10-14 13:46:33.917010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.315 [2024-10-14 13:46:33.917039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.315 qpair failed and we were unable to recover it.
00:35:42.315 [2024-10-14 13:46:33.926877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.315 [2024-10-14 13:46:33.926974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.315 [2024-10-14 13:46:33.927002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.315 [2024-10-14 13:46:33.927017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.315 [2024-10-14 13:46:33.927030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.315 [2024-10-14 13:46:33.927059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.315 qpair failed and we were unable to recover it.
00:35:42.315 [2024-10-14 13:46:33.936905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.315 [2024-10-14 13:46:33.936996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.315 [2024-10-14 13:46:33.937020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.315 [2024-10-14 13:46:33.937035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.315 [2024-10-14 13:46:33.937048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.315 [2024-10-14 13:46:33.937077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.315 qpair failed and we were unable to recover it.
00:35:42.315 [2024-10-14 13:46:33.946905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.315 [2024-10-14 13:46:33.946988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.315 [2024-10-14 13:46:33.947013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.315 [2024-10-14 13:46:33.947028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.315 [2024-10-14 13:46:33.947041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.315 [2024-10-14 13:46:33.947069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.316 qpair failed and we were unable to recover it.
00:35:42.316 [2024-10-14 13:46:33.956946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.316 [2024-10-14 13:46:33.957029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.316 [2024-10-14 13:46:33.957054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.316 [2024-10-14 13:46:33.957069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.316 [2024-10-14 13:46:33.957081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.316 [2024-10-14 13:46:33.957109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.316 qpair failed and we were unable to recover it.
00:35:42.316 [2024-10-14 13:46:33.967030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.316 [2024-10-14 13:46:33.967133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.316 [2024-10-14 13:46:33.967158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.316 [2024-10-14 13:46:33.967173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.316 [2024-10-14 13:46:33.967186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.316 [2024-10-14 13:46:33.967214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.316 qpair failed and we were unable to recover it.
00:35:42.316 [2024-10-14 13:46:33.977006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:33.977099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:33.977123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:33.977147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:33.977160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:33.977189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:33.987022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:33.987114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:33.987194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:33.987210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:33.987231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:33.987262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:33.997074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:33.997176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:33.997203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:33.997218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:33.997230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:33.997259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.007143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.007236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.007261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.007284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.007298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.007327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.017145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.017249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.017276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.017290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.017304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.017333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.027163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.027251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.027276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.027291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.027304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.027332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.037178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.037267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.037292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.037306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.037319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.037349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.047233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.047324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.047348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.047363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.047376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.047404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.057244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.057358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.057385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.057400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.057413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.057441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.067301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.067389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.067413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.067428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.067441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.067469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.077316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.077401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.077425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.077440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.077452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.077480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.316 [2024-10-14 13:46:34.087368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.316 [2024-10-14 13:46:34.087457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.316 [2024-10-14 13:46:34.087482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.316 [2024-10-14 13:46:34.087497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.316 [2024-10-14 13:46:34.087509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.316 [2024-10-14 13:46:34.087538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.316 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.097395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.097488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.097513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.097534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.097548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.097576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.107438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.107562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.107588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.107603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.107616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.107644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.117423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.117513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.117538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.117552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.117564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.117593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.127511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.127606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.127630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.127645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.127658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.127686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.137539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.137633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.137658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.137672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.137685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.137714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.147523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.147608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.147633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.147648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.147660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.147689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.157525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.157614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.157639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.157653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.157666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.157694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.317 [2024-10-14 13:46:34.167575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.317 [2024-10-14 13:46:34.167670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.317 [2024-10-14 13:46:34.167699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.317 [2024-10-14 13:46:34.167714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.317 [2024-10-14 13:46:34.167727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.317 [2024-10-14 13:46:34.167759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.317 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.177678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.177780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.177809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.177825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.177837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.177867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.187623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.187713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.187745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.187761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.187774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.187804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.197707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.197827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.197853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.197868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.197881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.197908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.207733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.207877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.207908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.207924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.207937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.207966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.217708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.217794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.217819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.217833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.217846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.217874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.227756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.227891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.227917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.227932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.227944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.227973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.237816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.237905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.237930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.237945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.237958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.237986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.247860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.577 [2024-10-14 13:46:34.248005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.577 [2024-10-14 13:46:34.248034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.577 [2024-10-14 13:46:34.248051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.577 [2024-10-14 13:46:34.248064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.577 [2024-10-14 13:46:34.248094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.577 qpair failed and we were unable to recover it. 
00:35:42.577 [2024-10-14 13:46:34.257842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.577 [2024-10-14 13:46:34.257935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.577 [2024-10-14 13:46:34.257960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.577 [2024-10-14 13:46:34.257976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.577 [2024-10-14 13:46:34.257989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.577 [2024-10-14 13:46:34.258017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.577 qpair failed and we were unable to recover it.
00:35:42.577 [2024-10-14 13:46:34.267896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.577 [2024-10-14 13:46:34.268008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.577 [2024-10-14 13:46:34.268034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.577 [2024-10-14 13:46:34.268048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.577 [2024-10-14 13:46:34.268061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.577 [2024-10-14 13:46:34.268088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.577 qpair failed and we were unable to recover it.
00:35:42.577 [2024-10-14 13:46:34.277932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.577 [2024-10-14 13:46:34.278017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.577 [2024-10-14 13:46:34.278047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.278062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.278076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.278104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.287968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.288056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.288082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.288096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.288110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.288145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.297977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.298070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.298095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.298109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.298122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.298163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.308016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.308104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.308137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.308154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.308167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.308195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.318075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.318170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.318196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.318211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.318223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.318252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.328114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.328256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.328281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.328296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.328309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.328338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.338085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.338181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.338207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.338221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.338234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.338263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.348164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.348269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.348293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.348308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.348321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.348349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.358154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.358251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.358275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.358289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.358302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.358331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.368202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.368302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.368331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.368346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.368360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.368388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.378196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.378285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.378311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.378325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.378339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.378368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.388265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.388393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.388419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.388435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.388448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.388476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.398254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.398385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.398411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.398426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.398439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.398468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.408279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.408366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.408391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.408406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.408419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.578 [2024-10-14 13:46:34.408452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.578 qpair failed and we were unable to recover it.
00:35:42.578 [2024-10-14 13:46:34.418304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.578 [2024-10-14 13:46:34.418393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.578 [2024-10-14 13:46:34.418417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.578 [2024-10-14 13:46:34.418431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.578 [2024-10-14 13:46:34.418445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.579 [2024-10-14 13:46:34.418474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.579 qpair failed and we were unable to recover it.
00:35:42.579 [2024-10-14 13:46:34.428416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.579 [2024-10-14 13:46:34.428506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.579 [2024-10-14 13:46:34.428541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.579 [2024-10-14 13:46:34.428569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.579 [2024-10-14 13:46:34.428594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.579 [2024-10-14 13:46:34.428631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.579 qpair failed and we were unable to recover it.
00:35:42.839 [2024-10-14 13:46:34.438341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.839 [2024-10-14 13:46:34.438441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.839 [2024-10-14 13:46:34.438468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.839 [2024-10-14 13:46:34.438483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.839 [2024-10-14 13:46:34.438496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.839 [2024-10-14 13:46:34.438526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.839 qpair failed and we were unable to recover it.
00:35:42.839 [2024-10-14 13:46:34.448417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.839 [2024-10-14 13:46:34.448509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.839 [2024-10-14 13:46:34.448534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.839 [2024-10-14 13:46:34.448548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.839 [2024-10-14 13:46:34.448562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.839 [2024-10-14 13:46:34.448591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.839 qpair failed and we were unable to recover it.
00:35:42.839 [2024-10-14 13:46:34.458482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.839 [2024-10-14 13:46:34.458575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.839 [2024-10-14 13:46:34.458608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.839 [2024-10-14 13:46:34.458626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.839 [2024-10-14 13:46:34.458639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.839 [2024-10-14 13:46:34.458668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.839 qpair failed and we were unable to recover it.
00:35:42.839 [2024-10-14 13:46:34.468431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.839 [2024-10-14 13:46:34.468512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.839 [2024-10-14 13:46:34.468537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.839 [2024-10-14 13:46:34.468552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.839 [2024-10-14 13:46:34.468565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.468593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.478568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.478658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.478684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.478698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.478712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.478740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.488555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.488648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.488674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.488688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.488701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.488730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.498595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.498682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.498707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.498722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.498734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.498768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.508559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.508649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.508674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.508688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.508701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.508729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.518547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.518631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.518655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.518670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.518683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.518711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.528631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.528724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.528750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.528764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.528776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.528804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.538644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.538732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.538757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.538772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.538785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.538814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.548658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.548775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.548806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.548821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.548835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.548863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.558776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.558857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.558881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.558896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.558908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.558937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.568760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.568875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.568902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.568918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.568930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.568960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.578863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.578960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.578985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.579001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.579014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.579043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.588794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.588884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.588908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.588922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.588936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.588970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.598815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.598902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.598926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.598941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.598953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.598981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.840 [2024-10-14 13:46:34.608871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:42.840 [2024-10-14 13:46:34.608965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:42.840 [2024-10-14 13:46:34.608993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:42.840 [2024-10-14 13:46:34.609010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:42.840 [2024-10-14 13:46:34.609022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340
00:35:42.840 [2024-10-14 13:46:34.609051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:42.840 qpair failed and we were unable to recover it.
00:35:42.841 [2024-10-14 13:46:34.618913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.619007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.619033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.619048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.619061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.619090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:42.841 [2024-10-14 13:46:34.628962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.629055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.629080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.629095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.629108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.629144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:42.841 [2024-10-14 13:46:34.638910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.638994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.639024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.639040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.639053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.639082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:42.841 [2024-10-14 13:46:34.649021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.649124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.649157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.649173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.649186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.649214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:42.841 [2024-10-14 13:46:34.659001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.659089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.659114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.659137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.659152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.659181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:42.841 [2024-10-14 13:46:34.669038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.669147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.669173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.669187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.669200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.669229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:42.841 [2024-10-14 13:46:34.679061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.679144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.679174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.679188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.679200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.679236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:42.841 [2024-10-14 13:46:34.689101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:42.841 [2024-10-14 13:46:34.689207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:42.841 [2024-10-14 13:46:34.689234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:42.841 [2024-10-14 13:46:34.689249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:42.841 [2024-10-14 13:46:34.689262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:42.841 [2024-10-14 13:46:34.689291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.841 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.699152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.699247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.699275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.699290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.699304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.699334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.709125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.709226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.709259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.709274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.709286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.709316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.719213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.719325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.719350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.719365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.719378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.719406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.729188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.729284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.729315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.729330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.729343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.729372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.739246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.739359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.739384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.739398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.739412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.739440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.749254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.749374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.749399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.749414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.749427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.749455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.759264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.759389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.759415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.759430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.759442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.759471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.769329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.769454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.769480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.769495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.101 [2024-10-14 13:46:34.769508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.101 [2024-10-14 13:46:34.769542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.101 qpair failed and we were unable to recover it. 
00:35:43.101 [2024-10-14 13:46:34.779414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.101 [2024-10-14 13:46:34.779509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.101 [2024-10-14 13:46:34.779534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.101 [2024-10-14 13:46:34.779548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.779561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.779590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.789373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.789458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.789484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.789499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.789511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.789540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.799415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.799505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.799531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.799546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.799558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.799586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.809468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.809575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.809600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.809615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.809628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.809656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.819448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.819536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.819566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.819582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.819594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.819623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.829466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.829548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.829573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.829588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.829601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.829628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.839513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.839639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.839664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.839678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.839691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.839720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.849524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.849616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.849641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.849655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.849669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.849697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.859581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.859679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.859704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.859718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.859737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.859767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.869576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.869703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.869728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.869743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.869756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.869784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.879614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.879747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.879772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.879787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.879800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.879828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.889661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.889746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.889771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.889785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.889798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.889826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.899714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.899830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.899854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.102 [2024-10-14 13:46:34.899869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.102 [2024-10-14 13:46:34.899882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.102 [2024-10-14 13:46:34.899910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.102 qpair failed and we were unable to recover it. 
00:35:43.102 [2024-10-14 13:46:34.909709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.102 [2024-10-14 13:46:34.909788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.102 [2024-10-14 13:46:34.909819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.103 [2024-10-14 13:46:34.909834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.103 [2024-10-14 13:46:34.909847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.103 [2024-10-14 13:46:34.909876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.103 qpair failed and we were unable to recover it. 
00:35:43.103 [2024-10-14 13:46:34.919722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.103 [2024-10-14 13:46:34.919806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.103 [2024-10-14 13:46:34.919831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.103 [2024-10-14 13:46:34.919846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.103 [2024-10-14 13:46:34.919859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.103 [2024-10-14 13:46:34.919887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.103 qpair failed and we were unable to recover it. 
00:35:43.103 [2024-10-14 13:46:34.929810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.103 [2024-10-14 13:46:34.929904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.103 [2024-10-14 13:46:34.929929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.103 [2024-10-14 13:46:34.929944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.103 [2024-10-14 13:46:34.929957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.103 [2024-10-14 13:46:34.929985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.103 qpair failed and we were unable to recover it. 
00:35:43.103 [2024-10-14 13:46:34.939762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.103 [2024-10-14 13:46:34.939844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.103 [2024-10-14 13:46:34.939870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.103 [2024-10-14 13:46:34.939884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.103 [2024-10-14 13:46:34.939896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.103 [2024-10-14 13:46:34.939926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.103 qpair failed and we were unable to recover it. 
00:35:43.103 [2024-10-14 13:46:34.949799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.103 [2024-10-14 13:46:34.949884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.103 [2024-10-14 13:46:34.949910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.103 [2024-10-14 13:46:34.949926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.103 [2024-10-14 13:46:34.949944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.103 [2024-10-14 13:46:34.949973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.103 qpair failed and we were unable to recover it. 
00:35:43.361 [2024-10-14 13:46:34.959833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.361 [2024-10-14 13:46:34.959923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.361 [2024-10-14 13:46:34.959950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.361 [2024-10-14 13:46:34.959966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.361 [2024-10-14 13:46:34.959979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.361 [2024-10-14 13:46:34.960009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.361 qpair failed and we were unable to recover it. 
00:35:43.361 [2024-10-14 13:46:34.969882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.361 [2024-10-14 13:46:34.969974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.361 [2024-10-14 13:46:34.970001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.361 [2024-10-14 13:46:34.970016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.361 [2024-10-14 13:46:34.970029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.361 [2024-10-14 13:46:34.970058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.361 qpair failed and we were unable to recover it. 
00:35:43.361 [2024-10-14 13:46:34.979887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.361 [2024-10-14 13:46:34.979970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.361 [2024-10-14 13:46:34.979996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.361 [2024-10-14 13:46:34.980011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.361 [2024-10-14 13:46:34.980024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.361 [2024-10-14 13:46:34.980052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.361 qpair failed and we were unable to recover it. 
00:35:43.361 [2024-10-14 13:46:34.989925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.361 [2024-10-14 13:46:34.990051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.361 [2024-10-14 13:46:34.990076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.361 [2024-10-14 13:46:34.990091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.361 [2024-10-14 13:46:34.990104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.362 [2024-10-14 13:46:34.990142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.362 qpair failed and we were unable to recover it. 
00:35:43.362 [2024-10-14 13:46:34.999937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.000034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.000063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.000079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.000092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.362 [2024-10-14 13:46:35.000121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.362 qpair failed and we were unable to recover it. 
00:35:43.362 [2024-10-14 13:46:35.010085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.010216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.010242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.010257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.010270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5c9340 00:35:43.362 [2024-10-14 13:46:35.010299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:43.362 qpair failed and we were unable to recover it. 
00:35:43.362 [2024-10-14 13:46:35.019994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.020085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.020118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.020143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.020157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9994000b90 00:35:43.362 [2024-10-14 13:46:35.020189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:43.362 qpair failed and we were unable to recover it. 
00:35:43.362 [2024-10-14 13:46:35.030077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.030179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.030206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.030221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.030233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9994000b90 00:35:43.362 [2024-10-14 13:46:35.030264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:43.362 qpair failed and we were unable to recover it. 
00:35:43.362 [2024-10-14 13:46:35.040142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.040230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.040262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.040278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.040297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f999c000b90 00:35:43.362 [2024-10-14 13:46:35.040332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:43.362 qpair failed and we were unable to recover it. 
00:35:43.362 [2024-10-14 13:46:35.050113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.050234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.050260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.050274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.050288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f999c000b90 00:35:43.362 [2024-10-14 13:46:35.050318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:43.362 qpair failed and we were unable to recover it. 00:35:43.362 [2024-10-14 13:46:35.050450] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:43.362 A controller has encountered a failure and is being reset. 
00:35:43.362 [2024-10-14 13:46:35.060113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.060212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.060244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.060261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.060273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9990000b90 00:35:43.362 [2024-10-14 13:46:35.060306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:43.362 qpair failed and we were unable to recover it. 
00:35:43.362 [2024-10-14 13:46:35.070143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:43.362 [2024-10-14 13:46:35.070230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:43.362 [2024-10-14 13:46:35.070257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:43.362 [2024-10-14 13:46:35.070272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:43.362 [2024-10-14 13:46:35.070284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9990000b90 00:35:43.362 [2024-10-14 13:46:35.070315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:43.362 qpair failed and we were unable to recover it. 00:35:43.362 Controller properly reset. 00:35:43.362 Initializing NVMe Controllers 00:35:43.362 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:43.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:43.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:43.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:43.362 Initialization complete. Launching workers. 
00:35:43.362 Starting thread on core 1 00:35:43.362 Starting thread on core 2 00:35:43.362 Starting thread on core 3 00:35:43.362 Starting thread on core 0 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:43.620 00:35:43.620 real 0m10.821s 00:35:43.620 user 0m19.569s 00:35:43.620 sys 0m5.137s 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:43.620 ************************************ 00:35:43.620 END TEST nvmf_target_disconnect_tc2 00:35:43.620 ************************************ 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:43.620 rmmod nvme_tcp 00:35:43.620 rmmod nvme_fabrics 00:35:43.620 rmmod nvme_keyring 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 400850 ']' 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 400850 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 400850 ']' 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 400850 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 400850 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 400850' 00:35:43.620 killing process with pid 400850 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 400850 00:35:43.620 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 400850 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.880 13:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.787 13:46:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:45.787 00:35:45.787 real 0m15.747s 00:35:45.787 user 0m46.266s 00:35:45.787 sys 0m7.150s 00:35:45.787 13:46:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:45.787 13:46:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:45.787 ************************************ 00:35:45.787 END TEST nvmf_target_disconnect 00:35:45.787 ************************************ 00:35:45.787 13:46:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:45.787 00:35:45.787 real 6m43.322s 00:35:45.787 user 17m12.648s 00:35:45.787 sys 1m27.416s 00:35:45.787 13:46:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:45.787 13:46:37 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.787 ************************************ 00:35:45.787 END TEST nvmf_host 00:35:45.787 ************************************ 00:35:45.787 13:46:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:45.787 13:46:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:45.787 13:46:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:45.787 13:46:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:45.787 13:46:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:45.787 13:46:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:46.046 ************************************ 00:35:46.046 START TEST nvmf_target_core_interrupt_mode 00:35:46.046 ************************************ 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:46.046 * Looking for test storage... 
00:35:46.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.046 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:46.047 13:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:46.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.047 --rc 
genhtml_branch_coverage=1 00:35:46.047 --rc genhtml_function_coverage=1 00:35:46.047 --rc genhtml_legend=1 00:35:46.047 --rc geninfo_all_blocks=1 00:35:46.047 --rc geninfo_unexecuted_blocks=1 00:35:46.047 00:35:46.047 ' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:46.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.047 --rc genhtml_branch_coverage=1 00:35:46.047 --rc genhtml_function_coverage=1 00:35:46.047 --rc genhtml_legend=1 00:35:46.047 --rc geninfo_all_blocks=1 00:35:46.047 --rc geninfo_unexecuted_blocks=1 00:35:46.047 00:35:46.047 ' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:46.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.047 --rc genhtml_branch_coverage=1 00:35:46.047 --rc genhtml_function_coverage=1 00:35:46.047 --rc genhtml_legend=1 00:35:46.047 --rc geninfo_all_blocks=1 00:35:46.047 --rc geninfo_unexecuted_blocks=1 00:35:46.047 00:35:46.047 ' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:46.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.047 --rc genhtml_branch_coverage=1 00:35:46.047 --rc genhtml_function_coverage=1 00:35:46.047 --rc genhtml_legend=1 00:35:46.047 --rc geninfo_all_blocks=1 00:35:46.047 --rc geninfo_unexecuted_blocks=1 00:35:46.047 00:35:46.047 ' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.047 
13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.047 13:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:46.047 
13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:46.047 ************************************ 00:35:46.047 START TEST nvmf_abort 00:35:46.047 ************************************ 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:46.047 * Looking for test storage... 
00:35:46.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:35:46.047 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:46.308 13:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.308 --rc genhtml_branch_coverage=1 00:35:46.308 --rc genhtml_function_coverage=1 00:35:46.308 --rc genhtml_legend=1 00:35:46.308 --rc geninfo_all_blocks=1 00:35:46.308 --rc geninfo_unexecuted_blocks=1 00:35:46.308 00:35:46.308 ' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.308 --rc genhtml_branch_coverage=1 00:35:46.308 --rc genhtml_function_coverage=1 00:35:46.308 --rc genhtml_legend=1 00:35:46.308 --rc geninfo_all_blocks=1 00:35:46.308 --rc geninfo_unexecuted_blocks=1 00:35:46.308 00:35:46.308 ' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.308 --rc genhtml_branch_coverage=1 00:35:46.308 --rc genhtml_function_coverage=1 00:35:46.308 --rc genhtml_legend=1 00:35:46.308 --rc geninfo_all_blocks=1 00:35:46.308 --rc geninfo_unexecuted_blocks=1 00:35:46.308 00:35:46.308 ' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:46.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.308 --rc genhtml_branch_coverage=1 00:35:46.308 --rc genhtml_function_coverage=1 00:35:46.308 --rc genhtml_legend=1 00:35:46.308 --rc geninfo_all_blocks=1 00:35:46.308 --rc geninfo_unexecuted_blocks=1 00:35:46.308 00:35:46.308 ' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
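The trace above runs SPDK's `cmp_versions` helper to decide whether the installed lcov (1.15) predates 2.x: it splits each version string on `.`, `-`, and `:` into arrays and compares the fields numerically. A minimal standalone sketch of that comparison (the function name `lt_version` is illustrative, not part of scripts/common.sh):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings numerically, field by field.
# Succeeds (status 0) when $1 < $2 -- e.g. "1.15" < "2".
lt_version() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

lt_version 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

Comparing numerically per field (rather than lexically on the whole string) is what makes 1.2 sort below 1.10, which a plain string compare would get wrong.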
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.308 13:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.308 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.309 13:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.309 13:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.309 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:46.309 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:46.309 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:46.309 13:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:48.215 13:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:48.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:48.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:48.215 
13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:48.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:48.215 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
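The device-discovery loop in this trace maps each matched PCI address (here 0000:0a:00.0 and 0000:0a:00.1, both Intel `ice` devices) to its kernel interface names by globbing sysfs. That lookup reduces to a few lines; `pci_to_netdevs` is an illustrative name, and the optional base-directory argument exists only so the glob can be exercised outside a real /sys:

```shell
#!/usr/bin/env bash
# Print the network interface names backed by a PCI device, using the
# standard sysfs layout /sys/bus/pci/devices/<addr>/net/<ifname>.
pci_to_netdevs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local d
    for d in "$base/$pci/net/"*; do
        # The glob stays literal when the directory is empty; skip that case.
        [ -e "$d" ] && printf '%s\n' "${d##*/}"
    done
}

# pci_to_netdevs 0000:0a:00.0    # on this test rig the log shows cvl_0_0
```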
00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:48.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:48.216 13:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:48.216 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:48.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:48.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:35:48.474 00:35:48.474 --- 10.0.0.2 ping statistics --- 00:35:48.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.474 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:48.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:48.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:35:48.474 00:35:48.474 --- 10.0.0.1 ping statistics --- 00:35:48.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.474 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=403609 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 403609 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 403609 ']' 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:48.474 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.474 [2024-10-14 13:46:40.246715] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:48.474 [2024-10-14 13:46:40.247916] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:35:48.474 [2024-10-14 13:46:40.247987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:48.474 [2024-10-14 13:46:40.317085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:48.730 [2024-10-14 13:46:40.367386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.730 [2024-10-14 13:46:40.367436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:48.730 [2024-10-14 13:46:40.367450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.730 [2024-10-14 13:46:40.367461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.730 [2024-10-14 13:46:40.367471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.730 [2024-10-14 13:46:40.368840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:48.730 [2024-10-14 13:46:40.368905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:48.730 [2024-10-14 13:46:40.368908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.730 [2024-10-14 13:46:40.465293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:48.731 [2024-10-14 13:46:40.465502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:48.731 [2024-10-14 13:46:40.465512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:35:48.731 [2024-10-14 13:46:40.465803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.731 [2024-10-14 13:46:40.517567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:35:48.731 Malloc0 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.731 Delay0 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.731 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.989 [2024-10-14 13:46:40.593789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.989 13:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:48.989 [2024-10-14 13:46:40.690460] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:51.522 Initializing NVMe Controllers 00:35:51.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:51.522 controller IO queue size 128 less than required 00:35:51.522 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:51.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:51.522 Initialization complete. Launching workers. 
00:35:51.522 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28548 00:35:51.522 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28605, failed to submit 66 00:35:51.522 success 28548, unsuccessful 57, failed 0 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:51.522 rmmod nvme_tcp 00:35:51.522 rmmod nvme_fabrics 00:35:51.522 rmmod nvme_keyring 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:51.522 13:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 403609 ']' 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 403609 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 403609 ']' 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 403609 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 403609 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 403609' 00:35:51.522 killing process with pid 403609 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 403609 00:35:51.522 13:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 403609 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:51.522 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:51.523 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.523 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:51.523 13:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:53.427 00:35:53.427 real 0m7.274s 00:35:53.427 user 0m9.165s 00:35:53.427 sys 0m2.944s 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:53.427 ************************************ 00:35:53.427 END TEST nvmf_abort 00:35:53.427 ************************************ 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:53.427 ************************************ 00:35:53.427 START TEST nvmf_ns_hotplug_stress 00:35:53.427 ************************************ 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:53.427 * Looking for test storage... 00:35:53.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:35:53.427 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:53.686 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:53.686 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.686 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.686 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:35:53.686 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:53.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.687 --rc genhtml_branch_coverage=1 00:35:53.687 --rc genhtml_function_coverage=1 00:35:53.687 --rc genhtml_legend=1 00:35:53.687 --rc geninfo_all_blocks=1 00:35:53.687 --rc geninfo_unexecuted_blocks=1 00:35:53.687 00:35:53.687 ' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:53.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.687 --rc genhtml_branch_coverage=1 00:35:53.687 --rc genhtml_function_coverage=1 00:35:53.687 --rc genhtml_legend=1 00:35:53.687 --rc geninfo_all_blocks=1 00:35:53.687 --rc geninfo_unexecuted_blocks=1 00:35:53.687 00:35:53.687 ' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:53.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.687 --rc genhtml_branch_coverage=1 00:35:53.687 --rc genhtml_function_coverage=1 00:35:53.687 --rc genhtml_legend=1 00:35:53.687 --rc geninfo_all_blocks=1 00:35:53.687 --rc geninfo_unexecuted_blocks=1 00:35:53.687 00:35:53.687 ' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:53.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.687 --rc genhtml_branch_coverage=1 00:35:53.687 --rc genhtml_function_coverage=1 00:35:53.687 --rc genhtml_legend=1 00:35:53.687 --rc geninfo_all_blocks=1 00:35:53.687 --rc geninfo_unexecuted_blocks=1 00:35:53.687 00:35:53.687 ' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.687 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.688 13:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:53.688 13:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:35:53.688 13:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:35:55.589 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:35:55.590 13:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:55.590 
13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:55.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:55.590 13:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:55.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:55.590 13:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:55.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:55.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:55.590 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:55.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:55.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:35:55.848 00:35:55.848 --- 10.0.0.2 ping statistics --- 00:35:55.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.848 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:55.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:55.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:35:55.848 00:35:55.848 --- 10.0.0.1 ping statistics --- 00:35:55.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:55.848 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:55.848 13:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=405887 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 405887 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 405887 ']' 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.848 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:55.849 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.849 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:55.849 13:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:56.107 [2024-10-14 13:46:47.731313] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:56.107 [2024-10-14 13:46:47.732442] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:35:56.107 [2024-10-14 13:46:47.732511] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:56.107 [2024-10-14 13:46:47.797505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:56.107 [2024-10-14 13:46:47.840854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:56.107 [2024-10-14 13:46:47.840912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:56.107 [2024-10-14 13:46:47.840925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:56.107 [2024-10-14 13:46:47.840936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:56.107 [2024-10-14 13:46:47.840945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:56.107 [2024-10-14 13:46:47.842344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:56.107 [2024-10-14 13:46:47.842413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.107 [2024-10-14 13:46:47.842409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:56.107 [2024-10-14 13:46:47.922954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:56.107 [2024-10-14 13:46:47.923168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:56.107 [2024-10-14 13:46:47.923203] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:56.107 [2024-10-14 13:46:47.923467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:35:56.366 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:56.625 [2024-10-14 13:46:48.299061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.625 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:56.884 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:57.141 [2024-10-14 13:46:48.859419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.141 13:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:57.400 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:35:57.659 Malloc0 00:35:57.659 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:57.917 Delay0 00:35:57.917 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:58.176 13:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:35:58.742 NULL1 00:35:58.742 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:35:59.000 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=406303 00:35:59.000 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:35:59.000 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:35:59.000 13:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:00.373 Read completed with error (sct=0, sc=11) 00:36:00.373 13:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:00.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:36:00.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:00.373 13:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:00.373 13:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:00.631 true 00:36:00.631 13:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:00.631 13:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:01.564 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:01.564 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:01.564 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:01.821 true 00:36:02.079 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:02.079 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.337 13:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:02.595 13:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:02.595 13:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:02.853 true 00:36:02.853 13:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:02.853 13:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.110 13:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:03.368 13:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:03.368 13:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:03.626 true 00:36:03.626 13:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:03.626 13:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:04.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:04.560 13:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:04.818 13:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:04.818 13:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:05.075 true 00:36:05.075 13:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:05.075 13:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.333 13:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:05.590 13:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:05.590 13:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:05.847 true 00:36:05.847 13:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
406303 00:36:05.847 13:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:06.104 13:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:06.362 13:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:06.619 13:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:06.619 true 00:36:06.877 13:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:06.877 13:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:07.811 13:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:08.069 13:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:08.069 13:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:08.327 true 00:36:08.327 13:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 406303 00:36:08.327 13:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.584 13:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:08.842 13:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:08.842 13:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:09.101 true 00:36:09.101 13:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:09.101 13:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.359 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:09.616 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:09.616 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:09.874 true 00:36:09.874 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:09.874 13:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:10.808 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:10.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:11.064 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:11.064 13:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:11.321 true 00:36:11.321 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:11.321 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:11.579 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:11.837 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:11.837 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 
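The repeating pattern in the records above is the core stress loop of ns_hotplug_stress.sh: while the I/O process (PID 406303 in this run) is still alive, namespace 1 is hot-removed, re-added, and the backing null bdev grown by one block. A minimal runnable sketch of that cycle, with a stub `rpc` function standing in for `scripts/rpc.py` (the stub and the fixed iteration count are assumptions for illustration, not the script verbatim):

```shell
#!/usr/bin/env bash
# Sketch of the add/remove/resize cycle seen in the log above.
# 'rpc' is a stub for spdk/scripts/rpc.py so the sketch runs standalone;
# the real test drives a live SPDK nvmf target over JSON-RPC.
rpc() { echo "rpc $*"; }

pid=$$          # stands in for the background I/O process (406303 in the log)
null_size=1000  # null bdev size in blocks, grown each iteration

for _ in 1 2 3; do
    kill -0 "$pid" || break                                      # sh@44: is I/O still running?
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-unplug NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: hot-plug it back
    null_size=$((null_size + 1))                                 # sh@49: next size
    rpc bdev_null_resize NULL1 "$null_size"                      # sh@50: grow backing bdev
done
echo "final null_size=$null_size"
```

The `kill -0` probe is what eventually ends the loop: once the I/O process exits, the signal test fails and the script falls through to cleanup, which matches the "No such process" record later in this log.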
00:36:12.095 true 00:36:12.095 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:12.095 13:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:12.352 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:12.611 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:12.611 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:12.869 true 00:36:12.869 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:12.869 13:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.802 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:14.059 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:14.059 13:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:14.906 true 00:36:14.906 13:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:14.906 13:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.906 13:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:14.906 13:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:14.906 13:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:15.163 true 00:36:15.163 13:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:15.163 13:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:16.094 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:36:16.094 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:16.094 13:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:16.350 true 00:36:16.607 13:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:16.607 13:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.865 13:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.123 13:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:17.123 13:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:17.380 true 00:36:17.380 13:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:17.380 13:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.315 13:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.573 13:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:18.573 13:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:18.831 true 00:36:18.831 13:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:18.831 13:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.089 13:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.347 13:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:19.347 13:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:19.606 true 00:36:19.606 13:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:19.606 13:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.863 13:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.122 13:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:20.122 13:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:20.380 true 00:36:20.380 13:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:20.380 13:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.314 13:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.572 13:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:21.572 13:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:21.829 true 00:36:21.829 13:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:21.830 13:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.088 13:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.346 13:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:22.346 13:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:22.603 true 00:36:22.603 13:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:22.603 13:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.861 13:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.119 13:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:23.119 13:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:23.376 true 00:36:23.376 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 
00:36:23.376 13:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:24.309 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.567 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:24.567 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:24.825 true 00:36:25.083 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:25.083 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.341 13:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.598 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:25.598 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:25.856 true 00:36:25.856 13:47:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:25.856 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.114 13:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.372 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:26.372 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:26.629 true 00:36:26.629 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:26.629 13:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:27.563 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.820 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:27.820 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:28.078 true 00:36:28.078 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:28.078 13:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.336 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.593 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:28.593 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:28.851 true 00:36:28.851 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303 00:36:28.851 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.109 13:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.367 Initializing NVMe Controllers 00:36:29.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:29.367 Controller IO queue size 128, less than required. 
00:36:29.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:29.367 Controller IO queue size 128, less than required.
00:36:29.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:29.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:29.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:29.367 Initialization complete. Launching workers.
00:36:29.367 ========================================================
00:36:29.367                                                           Latency(us)
00:36:29.367 Device Information                                      : IOPS     MiB/s   Average   min       max
00:36:29.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  708.95   0.35  75142.03  2483.44  1016461.70
00:36:29.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8550.37   4.17  14969.86  1422.34   539023.82
00:36:29.367 ========================================================
00:36:29.367 Total                                                   : 9259.32   4.52  19577.00  1422.34  1016461.70
00:36:29.367
00:36:29.367 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:36:29.367 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:36:29.624 true
00:36:29.883 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 406303
00:36:29.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (406303) - No such process
00:36:29.883 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 406303
00:36:29.883 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.140 13:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:30.398 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:30.398 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:30.398 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:30.398 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:30.398 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:30.656 null0 00:36:30.656 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:30.656 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:30.656 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:30.914 null1 00:36:30.915 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:30.915 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:36:30.915 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:31.172 null2 00:36:31.172 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:31.172 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:31.172 13:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:31.431 null3 00:36:31.431 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:31.431 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:31.431 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:31.689 null4 00:36:31.689 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:31.689 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:31.689 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:31.947 null5 00:36:31.947 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:31.947 13:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:31.947 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:32.206 null6 00:36:32.206 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:32.206 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:32.206 13:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:32.465 null7 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 410188 410189 410192 410197 410199 410201 410203 410206 00:36:32.466 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:32.725 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.984 13:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:33.242 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:33.242 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:33.242 13:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:33.242 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:33.242 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.500 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:33.500 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:33.500 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:33.777 13:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.777 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:34.036 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:34.036 13:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:34.036 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:34.036 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.036 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:34.036 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:34.036 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:34.036 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.294 13:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.294 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.295 13:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.295 13:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:34.295 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.295 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.295 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:34.552 13:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:34.553 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:34.553 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.553 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:34.553 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:34.553 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:34.553 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:34.553 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:34.811 13:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.811 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.811 13:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:35.069 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:35.069 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:35.069 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.069 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:35.069 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:35.069 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:35.069 13:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:35.069 13:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:35.327 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.327 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.327 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:35.585 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.585 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.585 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:35.585 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.585 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.585 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.585 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:35.585 13:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:35.586 13:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.586 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:35.844 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:35.844 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:35.844 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:35.844 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.844 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:35.844 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:35.844 13:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:35.844 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:36.102 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.102 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.102 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:36.102 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.102 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.103 13:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.103 13:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.103 13:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:36.363 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:36.363 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:36.363 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:36.363 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:36.363 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:36.363 13:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:36.363 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.363 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.624 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:36.882 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:37.140 13:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.140 13:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.705 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.705 13:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.964 13:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:37.964 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:38.221 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:38.221 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:38.221 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:38.221 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.222 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:38.222 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:38.222 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:38.222 13:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 
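The trace above repeats the same cycle: `ns_hotplug_stress.sh` lines 16–18 loop ten times, hot-adding namespaces 1–8 (backed by bdevs null0–null7) and then hot-removing them. The interleaved ordering of the add/remove calls in the log suggests they are launched in parallel. A minimal sketch of that pattern, with a stub `rpc` function standing in for `scripts/rpc.py` (an assumption for illustration; the real script issues these calls against a live nvmf target):

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress add/remove loop seen in the trace above.
# `rpc` is a stub standing in for scripts/rpc.py; the real script talks
# to a running nvmf_tgt process over its RPC socket.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

stress_loop() {
    local i=0 n
    while (( i < 10 )); do
        # Hot-add namespaces 1..8 (backed by bdevs null0..null7) in parallel
        for n in {1..8}; do
            rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
        done
        wait
        # Hot-remove the same namespace IDs, again in parallel
        for n in {1..8}; do
            rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
        done
        wait
        (( ++i ))
    done
}
```

Running the namespace operations as background jobs is what produces the shuffled ordering in the log (e.g. `-n 4` completing before `-n 1`): each iteration races eight adds, waits, then races eight removes.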
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@514 -- # nvmfcleanup 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.481 rmmod nvme_tcp 00:36:38.481 rmmod nvme_fabrics 00:36:38.481 rmmod nvme_keyring 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 405887 ']' 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 405887 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 405887 ']' 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 405887 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:38.481 13:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 405887 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 405887' 00:36:38.481 killing process with pid 405887 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 405887 00:36:38.481 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 405887 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:38.759 13:47:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:38.759 13:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.730 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.730 00:36:40.730 real 0m47.347s 00:36:40.730 user 3m17.827s 00:36:40.730 sys 0m22.128s 00:36:40.730 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:40.730 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:40.730 ************************************ 00:36:40.730 END TEST nvmf_ns_hotplug_stress 00:36:40.730 ************************************ 00:36:40.730 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:40.730 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:40.730 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:40.730 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:40.730 ************************************ 00:36:40.730 START TEST nvmf_delete_subsystem 00:36:40.730 ************************************ 00:36:40.730 13:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:40.989 * Looking for test storage... 00:36:40.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.989 13:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:40.989 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.990 13:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.990 --rc genhtml_branch_coverage=1 00:36:40.990 --rc genhtml_function_coverage=1 00:36:40.990 --rc genhtml_legend=1 00:36:40.990 --rc geninfo_all_blocks=1 00:36:40.990 --rc geninfo_unexecuted_blocks=1 00:36:40.990 00:36:40.990 ' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.990 --rc genhtml_branch_coverage=1 00:36:40.990 --rc genhtml_function_coverage=1 00:36:40.990 --rc genhtml_legend=1 00:36:40.990 --rc geninfo_all_blocks=1 00:36:40.990 --rc geninfo_unexecuted_blocks=1 00:36:40.990 00:36:40.990 ' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.990 --rc genhtml_branch_coverage=1 00:36:40.990 --rc 
genhtml_function_coverage=1 00:36:40.990 --rc genhtml_legend=1 00:36:40.990 --rc geninfo_all_blocks=1 00:36:40.990 --rc geninfo_unexecuted_blocks=1 00:36:40.990 00:36:40.990 ' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:40.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.990 --rc genhtml_branch_coverage=1 00:36:40.990 --rc genhtml_function_coverage=1 00:36:40.990 --rc genhtml_legend=1 00:36:40.990 --rc geninfo_all_blocks=1 00:36:40.990 --rc geninfo_unexecuted_blocks=1 00:36:40.990 00:36:40.990 ' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.990 13:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:40.990 13:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:43.526 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:43.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:43.526 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:43.526 13:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:43.526 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:43.526 13:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:43.526 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:43.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:43.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:36:43.527 00:36:43.527 --- 10.0.0.2 ping statistics --- 00:36:43.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.527 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:43.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:43.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:36:43.527 00:36:43.527 --- 10.0.0.1 ping statistics --- 00:36:43.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.527 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=413064 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 413064 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 413064 ']' 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:43.527 13:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.527 [2024-10-14 13:47:35.013933] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:43.527 [2024-10-14 13:47:35.015007] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:36:43.527 [2024-10-14 13:47:35.015060] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.527 [2024-10-14 13:47:35.079760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:43.527 [2024-10-14 13:47:35.127658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.527 [2024-10-14 13:47:35.127719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:43.527 [2024-10-14 13:47:35.127732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.527 [2024-10-14 13:47:35.127743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.527 [2024-10-14 13:47:35.127752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:43.527 [2024-10-14 13:47:35.129120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.527 [2024-10-14 13:47:35.129125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.527 [2024-10-14 13:47:35.222208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:43.527 [2024-10-14 13:47:35.222235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:43.527 [2024-10-14 13:47:35.222499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.527 [2024-10-14 13:47:35.273859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.527 [2024-10-14 13:47:35.294185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.527 NULL1 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:36:43.527 Delay0 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=413088 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:43.527 13:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:43.527 [2024-10-14 13:47:35.368082] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:36:46.055 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:46.055 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.055 13:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 
00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 starting I/O failed: -6 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 starting I/O failed: -6 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.055 Read completed with error (sct=0, sc=8) 00:36:46.055 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with 
error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 [2024-10-14 13:47:37.489702] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff39c000c00 is same with the state(6) to be set 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed 
with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error 
(sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 
00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Read completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 Write completed with error (sct=0, sc=8) 00:36:46.056 starting I/O failed: -6 00:36:46.056 starting I/O failed: -6 00:36:46.056 starting I/O failed: -6 00:36:46.056 starting I/O failed: -6 00:36:46.620 [2024-10-14 13:47:38.464633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89fd00 is same with the state(6) to be set 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, 
sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 [2024-10-14 13:47:38.491556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2290 is same with the state(6) to be set 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 
Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with 
error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 [2024-10-14 13:47:38.491846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a1ed0 is same with the state(6) to be set 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 [2024-10-14 13:47:38.492272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff39c00cfe0 is same with the state(6) to be set 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error 
(sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Write completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 Read completed with error (sct=0, sc=8) 00:36:46.879 [2024-10-14 13:47:38.492441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff39c00d7a0 is same with the state(6) to be set 00:36:46.879 Initializing NVMe Controllers 00:36:46.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:46.880 Controller IO queue size 128, less than required. 00:36:46.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:46.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:46.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:46.880 Initialization complete. Launching workers. 
00:36:46.880 ========================================================
00:36:46.880 Latency(us)
00:36:46.880 Device Information : IOPS MiB/s Average min max
00:36:46.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.60 0.09 901798.07 783.57 1012642.58
00:36:46.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.26 0.08 904314.12 666.68 1011985.90
00:36:46.880 ========================================================
00:36:46.880 Total : 353.86 0.17 902980.22 666.68 1012642.58
00:36:46.880
00:36:46.880 [2024-10-14 13:47:38.493234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89fd00 (9): Bad file descriptor
00:36:46.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:36:46.880 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:46.880 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:36:46.880 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 413088
00:36:46.880 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 413088
00:36:47.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (413088) - No such process
00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 413088
00:36:47.446 13:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 413088 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 413088 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:47.446 13:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:47.446 [2024-10-14 13:47:39.014060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=413493 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:47.446 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:47.446 [2024-10-14 13:47:39.064926] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:36:47.704 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:47.705 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493 00:36:47.705 13:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:48.270 13:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:48.270 13:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493 00:36:48.270 13:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:48.836 13:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:48.836 13:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493 00:36:48.836 13:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:49.401 13:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:49.401 13:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493 00:36:49.401 13:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:49.967 13:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:49.967 13:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493 00:36:49.967 13:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:50.225 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:50.225 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493 00:36:50.225 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:50.483 Initializing NVMe Controllers 00:36:50.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:50.483 Controller IO queue size 128, less than required. 00:36:50.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:50.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:50.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:50.483 Initialization complete. Launching workers. 
00:36:50.483 ========================================================
00:36:50.483 Latency(us)
00:36:50.484 Device Information : IOPS MiB/s Average min max
00:36:50.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004458.38 1000225.01 1012018.13
00:36:50.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004488.26 1000384.66 1011891.27
00:36:50.484 ========================================================
00:36:50.484 Total : 256.00 0.12 1004473.32 1000225.01 1012018.13
00:36:50.484
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 413493
00:36:50.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (413493) - No such process
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 413493
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.742 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.742 rmmod nvme_tcp 00:36:50.742 rmmod nvme_fabrics 00:36:50.742 rmmod nvme_keyring 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 413064 ']' 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 413064 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 413064 ']' 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 413064 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 413064 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 413064' 00:36:51.001 killing process with pid 413064 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 413064 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 413064 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:51.001 13:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.541 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:53.541 00:36:53.541 real 0m12.304s 00:36:53.542 user 0m24.556s 00:36:53.542 sys 0m3.778s 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:53.542 ************************************ 00:36:53.542 END TEST nvmf_delete_subsystem 00:36:53.542 ************************************ 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:53.542 ************************************ 00:36:53.542 START TEST nvmf_host_management 00:36:53.542 ************************************ 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:53.542 * Looking for test storage... 
00:36:53.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:36:53.542 13:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.542 13:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:53.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.542 --rc genhtml_branch_coverage=1 00:36:53.542 --rc genhtml_function_coverage=1 00:36:53.542 --rc genhtml_legend=1 00:36:53.542 --rc geninfo_all_blocks=1 00:36:53.542 --rc geninfo_unexecuted_blocks=1 00:36:53.542 00:36:53.542 ' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:53.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.542 --rc genhtml_branch_coverage=1 00:36:53.542 --rc genhtml_function_coverage=1 00:36:53.542 --rc genhtml_legend=1 00:36:53.542 --rc geninfo_all_blocks=1 00:36:53.542 --rc geninfo_unexecuted_blocks=1 00:36:53.542 00:36:53.542 ' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:53.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.542 --rc genhtml_branch_coverage=1 00:36:53.542 --rc genhtml_function_coverage=1 00:36:53.542 --rc genhtml_legend=1 00:36:53.542 --rc geninfo_all_blocks=1 00:36:53.542 --rc geninfo_unexecuted_blocks=1 00:36:53.542 00:36:53.542 ' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:53.542 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.542 --rc genhtml_branch_coverage=1 00:36:53.542 --rc genhtml_function_coverage=1 00:36:53.542 --rc genhtml_legend=1 00:36:53.542 --rc geninfo_all_blocks=1 00:36:53.542 --rc geninfo_unexecuted_blocks=1 00:36:53.542 00:36:53.542 ' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.542 13:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.542 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.543 
13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:36:53.543 13:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:55.451 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:55.452 
13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:55.452 13:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:55.452 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:55.452 13:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:55.452 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:55.452 13:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:55.452 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:55.452 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:55.452 13:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:55.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:55.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:36:55.452 00:36:55.452 --- 10.0.0.2 ping statistics --- 00:36:55.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.452 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:55.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:55.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:36:55.452 00:36:55.452 --- 10.0.0.1 ping statistics --- 00:36:55.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.452 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=415902 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 415902 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 415902 ']' 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:55.452 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.710 [2024-10-14 13:47:47.346550] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:55.710 [2024-10-14 13:47:47.347670] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:36:55.710 [2024-10-14 13:47:47.347727] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:55.710 [2024-10-14 13:47:47.417522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:55.710 [2024-10-14 13:47:47.472845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:55.710 [2024-10-14 13:47:47.472898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:55.710 [2024-10-14 13:47:47.472912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:55.710 [2024-10-14 13:47:47.472924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:55.710 [2024-10-14 13:47:47.472934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:55.710 [2024-10-14 13:47:47.474651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:55.710 [2024-10-14 13:47:47.476148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:55.710 [2024-10-14 13:47:47.476200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:55.710 [2024-10-14 13:47:47.476203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.969 [2024-10-14 13:47:47.569761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:55.969 [2024-10-14 13:47:47.569920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:55.969 [2024-10-14 13:47:47.570198] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:55.969 [2024-10-14 13:47:47.570753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:55.969 [2024-10-14 13:47:47.570963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.969 [2024-10-14 13:47:47.620865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.969 13:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.969 Malloc0 00:36:55.969 [2024-10-14 13:47:47.693034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=415996 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 415996 /var/tmp/bdevperf.sock 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 415996 ']' 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:55.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:55.969 { 00:36:55.969 "params": { 00:36:55.969 "name": "Nvme$subsystem", 00:36:55.969 "trtype": "$TEST_TRANSPORT", 00:36:55.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:55.969 "adrfam": "ipv4", 00:36:55.969 "trsvcid": "$NVMF_PORT", 00:36:55.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:55.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:55.969 "hdgst": ${hdgst:-false}, 00:36:55.969 "ddgst": ${ddgst:-false} 00:36:55.969 }, 00:36:55.969 "method": "bdev_nvme_attach_controller" 00:36:55.969 } 00:36:55.969 EOF 00:36:55.969 )") 00:36:55.969 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:36:55.970 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:36:55.970 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:36:55.970 13:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:55.970 "params": { 00:36:55.970 "name": "Nvme0", 00:36:55.970 "trtype": "tcp", 00:36:55.970 "traddr": "10.0.0.2", 00:36:55.970 "adrfam": "ipv4", 00:36:55.970 "trsvcid": "4420", 00:36:55.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:55.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:55.970 "hdgst": false, 00:36:55.970 "ddgst": false 00:36:55.970 }, 00:36:55.970 "method": "bdev_nvme_attach_controller" 00:36:55.970 }' 00:36:55.970 [2024-10-14 13:47:47.778591] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:36:55.970 [2024-10-14 13:47:47.778680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415996 ] 00:36:56.227 [2024-10-14 13:47:47.841403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.227 [2024-10-14 13:47:47.888366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.485 Running I/O for 10 seconds... 
00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:56.485 13:47:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:36:56.485 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.745 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:56.745 [2024-10-14 13:47:48.580877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.580942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.580958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.580970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.580983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.581013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.581026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.581039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.745 [2024-10-14 13:47:48.581051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is 
same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be 
set 00:36:56.746 [2024-10-14 13:47:48.581326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.581338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138af30 is same with the state(6) to be set 00:36:56.746 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.746 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:56.746 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.746 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:56.746 [2024-10-14 13:47:48.586696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:56.746 [2024-10-14 13:47:48.586740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.586760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:56.746 [2024-10-14 13:47:48.586775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.586789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:56.746 [2024-10-14 13:47:48.586804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 
[2024-10-14 13:47:48.586818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:56.746 [2024-10-14 13:47:48.586832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.586845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4de00 is same with the state(6) to be set 00:36:56.746 [2024-10-14 13:47:48.586929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.586950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.586976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.586991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 
13:47:48.587634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.746 [2024-10-14 13:47:48.587742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.746 [2024-10-14 13:47:48.587756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.587981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.587996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 
13:47:48.588313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.588900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.747 [2024-10-14 13:47:48.588913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:56.747 [2024-10-14 13:47:48.589003] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd66e80 was disconnected and freed. reset controller. 
00:36:56.747 [2024-10-14 13:47:48.590160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:56.747 task offset: 81920 on job bdev=Nvme0n1 fails 00:36:56.747 00:36:56.747 Latency(us) 00:36:56.747 [2024-10-14T11:47:48.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.747 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:56.747 Job: Nvme0n1 ended in about 0.41 seconds with error 00:36:56.747 Verification LBA range: start 0x0 length 0x400 00:36:56.747 Nvme0n1 : 0.41 1567.11 97.94 156.71 0.00 36077.72 2864.17 35146.71 00:36:56.747 [2024-10-14T11:47:48.600Z] =================================================================================================================== 00:36:56.747 [2024-10-14T11:47:48.601Z] Total : 1567.11 97.94 156.71 0.00 36077.72 2864.17 35146.71 00:36:56.748 [2024-10-14 13:47:48.592024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:56.748 [2024-10-14 13:47:48.592053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4de00 (9): Bad file descriptor 00:36:56.748 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.748 13:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:56.748 [2024-10-14 13:47:48.596098] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 415996 00:36:58.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (415996) - No such process 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:58.121 { 00:36:58.121 "params": { 00:36:58.121 "name": "Nvme$subsystem", 00:36:58.121 "trtype": "$TEST_TRANSPORT", 00:36:58.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.121 "adrfam": "ipv4", 00:36:58.121 "trsvcid": "$NVMF_PORT", 00:36:58.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.121 "hdgst": ${hdgst:-false}, 00:36:58.121 "ddgst": ${ddgst:-false} 
00:36:58.121 }, 00:36:58.121 "method": "bdev_nvme_attach_controller" 00:36:58.121 } 00:36:58.121 EOF 00:36:58.121 )") 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:36:58.121 13:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:58.121 "params": { 00:36:58.121 "name": "Nvme0", 00:36:58.121 "trtype": "tcp", 00:36:58.121 "traddr": "10.0.0.2", 00:36:58.121 "adrfam": "ipv4", 00:36:58.121 "trsvcid": "4420", 00:36:58.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.121 "hdgst": false, 00:36:58.121 "ddgst": false 00:36:58.121 }, 00:36:58.121 "method": "bdev_nvme_attach_controller" 00:36:58.121 }' 00:36:58.121 [2024-10-14 13:47:49.642636] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:36:58.121 [2024-10-14 13:47:49.642729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416249 ] 00:36:58.121 [2024-10-14 13:47:49.703807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.121 [2024-10-14 13:47:49.749743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.121 Running I/O for 1 seconds... 
00:36:59.496 1600.00 IOPS, 100.00 MiB/s 00:36:59.496 Latency(us) 00:36:59.496 [2024-10-14T11:47:51.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.496 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:59.496 Verification LBA range: start 0x0 length 0x400 00:36:59.496 Nvme0n1 : 1.01 1645.41 102.84 0.00 0.00 38269.98 4878.79 33787.45 00:36:59.496 [2024-10-14T11:47:51.349Z] =================================================================================================================== 00:36:59.496 [2024-10-14T11:47:51.349Z] Total : 1645.41 102.84 0.00 0.00 38269.98 4878.79 33787.45 00:36:59.496 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:59.496 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:36:59.497 
13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:59.497 rmmod nvme_tcp 00:36:59.497 rmmod nvme_fabrics 00:36:59.497 rmmod nvme_keyring 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 415902 ']' 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 415902 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 415902 ']' 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 415902 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 415902 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:59.497 13:47:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 415902' 00:36:59.497 killing process with pid 415902 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 415902 00:36:59.497 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 415902 00:36:59.756 [2024-10-14 13:47:51.381616] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.756 13:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.661 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:01.661 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:01.661 00:37:01.661 real 0m8.535s 00:37:01.661 user 0m16.549s 00:37:01.662 sys 0m3.748s 00:37:01.662 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.662 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:01.662 ************************************ 00:37:01.662 END TEST nvmf_host_management 00:37:01.662 ************************************ 00:37:01.662 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:01.662 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:01.662 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:01.662 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:01.662 ************************************ 00:37:01.662 START TEST nvmf_lvol 00:37:01.662 ************************************ 00:37:01.662 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:01.921 * Looking for test storage... 
00:37:01.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:01.921 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:01.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.922 --rc genhtml_branch_coverage=1 00:37:01.922 --rc genhtml_function_coverage=1 00:37:01.922 --rc genhtml_legend=1 00:37:01.922 --rc geninfo_all_blocks=1 00:37:01.922 --rc geninfo_unexecuted_blocks=1 00:37:01.922 00:37:01.922 ' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:01.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.922 --rc genhtml_branch_coverage=1 00:37:01.922 --rc genhtml_function_coverage=1 00:37:01.922 --rc genhtml_legend=1 00:37:01.922 --rc geninfo_all_blocks=1 00:37:01.922 --rc geninfo_unexecuted_blocks=1 00:37:01.922 00:37:01.922 ' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:01.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.922 --rc genhtml_branch_coverage=1 00:37:01.922 --rc genhtml_function_coverage=1 00:37:01.922 --rc genhtml_legend=1 00:37:01.922 --rc geninfo_all_blocks=1 00:37:01.922 --rc geninfo_unexecuted_blocks=1 00:37:01.922 00:37:01.922 ' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:01.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.922 --rc genhtml_branch_coverage=1 00:37:01.922 --rc genhtml_function_coverage=1 00:37:01.922 --rc genhtml_legend=1 00:37:01.922 --rc geninfo_all_blocks=1 00:37:01.922 --rc geninfo_unexecuted_blocks=1 00:37:01.922 00:37:01.922 ' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:01.922 
13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:01.922 13:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:04.455 13:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:04.455 13:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:04.455 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:04.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:04.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.456 13:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:04.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.456 13:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:04.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:04.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:04.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:37:04.456 00:37:04.456 --- 10.0.0.2 ping statistics --- 00:37:04.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.456 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:04.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:04.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:37:04.456 00:37:04.456 --- 10.0.0.1 ping statistics --- 00:37:04.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.456 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=418343 
00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 418343 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 418343 ']' 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:04.456 13:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:04.456 [2024-10-14 13:47:55.894790] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:04.456 [2024-10-14 13:47:55.895855] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:37:04.456 [2024-10-14 13:47:55.895906] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.456 [2024-10-14 13:47:55.959059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:04.456 [2024-10-14 13:47:56.006108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:04.456 [2024-10-14 13:47:56.006175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:04.456 [2024-10-14 13:47:56.006200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:04.457 [2024-10-14 13:47:56.006211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:04.457 [2024-10-14 13:47:56.006220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:04.457 [2024-10-14 13:47:56.007621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.457 [2024-10-14 13:47:56.007708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:04.457 [2024-10-14 13:47:56.007712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.457 [2024-10-14 13:47:56.090050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:04.457 [2024-10-14 13:47:56.090282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:04.457 [2024-10-14 13:47:56.090283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:04.457 [2024-10-14 13:47:56.090539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:04.457 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:04.457 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:04.457 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:04.457 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:04.457 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:04.457 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:04.457 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:04.714 [2024-10-14 13:47:56.384379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:04.714 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:04.972 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:04.972 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:05.230 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:05.230 13:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:05.487 13:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:05.744 13:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c33a386d-faca-4d12-93b3-c2790082d0b8 00:37:05.744 13:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c33a386d-faca-4d12-93b3-c2790082d0b8 lvol 20 00:37:06.002 13:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4b3f8f20-685d-4e20-ab42-cea64900ac89 00:37:06.002 13:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:06.569 13:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4b3f8f20-685d-4e20-ab42-cea64900ac89 00:37:06.569 13:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:06.828 [2024-10-14 13:47:58.624555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.828 13:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:07.085 
13:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=418760 00:37:07.085 13:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:07.085 13:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:08.458 13:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4b3f8f20-685d-4e20-ab42-cea64900ac89 MY_SNAPSHOT 00:37:08.458 13:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a5aa2c59-3af6-471d-91ae-b06df3b153f7 00:37:08.458 13:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4b3f8f20-685d-4e20-ab42-cea64900ac89 30 00:37:09.023 13:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a5aa2c59-3af6-471d-91ae-b06df3b153f7 MY_CLONE 00:37:09.284 13:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e462513a-dbb1-4146-9d95-e909d67652d1 00:37:09.284 13:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e462513a-dbb1-4146-9d95-e909d67652d1 00:37:09.853 13:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 418760 00:37:17.960 Initializing NVMe Controllers 00:37:17.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:17.960 
Controller IO queue size 128, less than required. 00:37:17.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:17.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:17.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:17.960 Initialization complete. Launching workers. 00:37:17.960 ======================================================== 00:37:17.960 Latency(us) 00:37:17.960 Device Information : IOPS MiB/s Average min max 00:37:17.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10355.90 40.45 12370.89 2091.30 56409.54 00:37:17.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10450.60 40.82 12257.95 3659.77 52615.21 00:37:17.960 ======================================================== 00:37:17.960 Total : 20806.50 81.28 12314.16 2091.30 56409.54 00:37:17.960 00:37:17.960 13:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.960 13:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4b3f8f20-685d-4e20-ab42-cea64900ac89 00:37:18.219 13:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c33a386d-faca-4d12-93b3-c2790082d0b8 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.477 rmmod nvme_tcp 00:37:18.477 rmmod nvme_fabrics 00:37:18.477 rmmod nvme_keyring 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 418343 ']' 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 418343 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 418343 ']' 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 418343 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 418343 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 418343' 00:37:18.477 killing process with pid 418343 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 418343 00:37:18.477 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 418343 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.735 13:48:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.735 13:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:21.268 00:37:21.268 real 0m19.061s 00:37:21.268 user 0m56.508s 00:37:21.268 sys 0m7.726s 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:21.268 ************************************ 00:37:21.268 END TEST nvmf_lvol 00:37:21.268 ************************************ 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:21.268 ************************************ 00:37:21.268 START TEST nvmf_lvs_grow 00:37:21.268 ************************************ 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:21.268 * Looking for test storage... 
00:37:21.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:21.268 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:21.269 13:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:21.269 13:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.269 --rc genhtml_branch_coverage=1 00:37:21.269 --rc genhtml_function_coverage=1 00:37:21.269 --rc genhtml_legend=1 00:37:21.269 --rc geninfo_all_blocks=1 00:37:21.269 --rc geninfo_unexecuted_blocks=1 00:37:21.269 00:37:21.269 ' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.269 --rc genhtml_branch_coverage=1 00:37:21.269 --rc genhtml_function_coverage=1 00:37:21.269 --rc genhtml_legend=1 00:37:21.269 --rc geninfo_all_blocks=1 00:37:21.269 --rc geninfo_unexecuted_blocks=1 00:37:21.269 00:37:21.269 ' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.269 --rc genhtml_branch_coverage=1 00:37:21.269 --rc genhtml_function_coverage=1 00:37:21.269 --rc genhtml_legend=1 00:37:21.269 --rc geninfo_all_blocks=1 00:37:21.269 --rc geninfo_unexecuted_blocks=1 00:37:21.269 00:37:21.269 ' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.269 --rc genhtml_branch_coverage=1 00:37:21.269 --rc genhtml_function_coverage=1 00:37:21.269 --rc genhtml_legend=1 00:37:21.269 --rc geninfo_all_blocks=1 00:37:21.269 --rc 
geninfo_unexecuted_blocks=1 00:37:21.269 00:37:21.269 ' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:21.269 13:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.269 13:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:21.269 13:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:21.269 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:21.270 13:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:23.173 
13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:23.173 13:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:23.173 13:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:23.173 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:23.173 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:23.173 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.173 13:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:23.173 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:23.173 
13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:23.173 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:23.174 13:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:23.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:23.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:37:23.174 00:37:23.174 --- 10.0.0.2 ping statistics --- 00:37:23.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.174 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:23.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:23.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:37:23.174 00:37:23.174 --- 10.0.0.1 ping statistics --- 00:37:23.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.174 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:23.174 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:23.432 13:48:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=422633 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 422633 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 422633 ']' 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:23.432 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:23.432 [2024-10-14 13:48:15.095117] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:23.432 [2024-10-14 13:48:15.096175] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:37:23.432 [2024-10-14 13:48:15.096250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.432 [2024-10-14 13:48:15.160149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.432 [2024-10-14 13:48:15.205893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.432 [2024-10-14 13:48:15.205965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.432 [2024-10-14 13:48:15.205988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.432 [2024-10-14 13:48:15.205999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.432 [2024-10-14 13:48:15.206009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:23.432 [2024-10-14 13:48:15.206586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.690 [2024-10-14 13:48:15.289354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:23.690 [2024-10-14 13:48:15.289752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
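Before the target application comes up, the `nvmf_tcp_init` steps logged above wire a dedicated network namespace for the target: the namespace is created, the target NIC is moved into it, 10.0.0.2/24 is assigned inside the namespace and 10.0.0.1/24 on the initiator side, links are brought up, an iptables rule opens TCP port 4420, and both directions are verified with ping. A hedged dry-run sketch of that sequence (the `run` helper only echoes each command; the real harness executes them as root against physical cvl_* interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring exercised in the log above.
# Assumption: run() echoes instead of executing; replace its body with
# "$@" and run as root to actually apply the configuration.
NS=cvl_0_0_ns_spdk   # target namespace, as named in the log
TGT=cvl_0_0          # target-side interface
INI=cvl_0_1          # initiator-side interface

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                          # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev "$INI"                      # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target address
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP listener port
run ping -c 1 10.0.0.2                                      # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator check
```

With this wiring in place, the target (`nvmf_tgt`) is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the subsequent app start in the log is prefixed with the namespace command.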
00:37:23.690 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.690 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:23.690 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:23.690 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:23.690 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:23.690 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.690 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:23.949 [2024-10-14 13:48:15.591251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:23.949 ************************************ 00:37:23.949 START TEST lvs_grow_clean 00:37:23.949 ************************************ 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:37:23.949 13:48:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:23.949 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:24.207 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:24.207 13:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:24.467 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=76c67ec1-503b-4247-9858-3acac65e9548 00:37:24.467 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:24.467 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:24.725 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:24.725 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:24.725 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 76c67ec1-503b-4247-9858-3acac65e9548 lvol 150 00:37:24.984 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=408ef36a-8e4e-4f17-96b9-1250c5ff4066 00:37:24.984 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:24.984 13:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:25.242 [2024-10-14 13:48:17.019262] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:25.242 [2024-10-14 13:48:17.019463] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:25.242 true 00:37:25.242 13:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:25.242 13:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:25.500 13:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:25.500 13:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:25.758 13:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 408ef36a-8e4e-4f17-96b9-1250c5ff4066 00:37:26.015 13:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:26.274 [2024-10-14 13:48:18.091383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:26.274 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=423062 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 423062 /var/tmp/bdevperf.sock 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 423062 ']' 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:26.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:26.534 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:26.791 [2024-10-14 13:48:18.419588] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:37:26.791 [2024-10-14 13:48:18.419662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423062 ] 00:37:26.791 [2024-10-14 13:48:18.480060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.791 [2024-10-14 13:48:18.531491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.049 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:27.049 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:27.049 13:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:27.307 Nvme0n1 00:37:27.307 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:27.565 [ 00:37:27.565 { 00:37:27.565 "name": "Nvme0n1", 00:37:27.565 "aliases": [ 00:37:27.565 "408ef36a-8e4e-4f17-96b9-1250c5ff4066" 00:37:27.565 ], 00:37:27.565 "product_name": "NVMe disk", 00:37:27.565 
"block_size": 4096, 00:37:27.565 "num_blocks": 38912, 00:37:27.565 "uuid": "408ef36a-8e4e-4f17-96b9-1250c5ff4066", 00:37:27.565 "numa_id": 0, 00:37:27.565 "assigned_rate_limits": { 00:37:27.565 "rw_ios_per_sec": 0, 00:37:27.565 "rw_mbytes_per_sec": 0, 00:37:27.565 "r_mbytes_per_sec": 0, 00:37:27.565 "w_mbytes_per_sec": 0 00:37:27.565 }, 00:37:27.565 "claimed": false, 00:37:27.565 "zoned": false, 00:37:27.565 "supported_io_types": { 00:37:27.565 "read": true, 00:37:27.565 "write": true, 00:37:27.565 "unmap": true, 00:37:27.565 "flush": true, 00:37:27.565 "reset": true, 00:37:27.565 "nvme_admin": true, 00:37:27.565 "nvme_io": true, 00:37:27.565 "nvme_io_md": false, 00:37:27.565 "write_zeroes": true, 00:37:27.565 "zcopy": false, 00:37:27.565 "get_zone_info": false, 00:37:27.565 "zone_management": false, 00:37:27.565 "zone_append": false, 00:37:27.565 "compare": true, 00:37:27.565 "compare_and_write": true, 00:37:27.565 "abort": true, 00:37:27.565 "seek_hole": false, 00:37:27.565 "seek_data": false, 00:37:27.565 "copy": true, 00:37:27.565 "nvme_iov_md": false 00:37:27.565 }, 00:37:27.565 "memory_domains": [ 00:37:27.565 { 00:37:27.565 "dma_device_id": "system", 00:37:27.565 "dma_device_type": 1 00:37:27.565 } 00:37:27.565 ], 00:37:27.565 "driver_specific": { 00:37:27.565 "nvme": [ 00:37:27.565 { 00:37:27.565 "trid": { 00:37:27.565 "trtype": "TCP", 00:37:27.565 "adrfam": "IPv4", 00:37:27.565 "traddr": "10.0.0.2", 00:37:27.565 "trsvcid": "4420", 00:37:27.565 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:27.565 }, 00:37:27.565 "ctrlr_data": { 00:37:27.565 "cntlid": 1, 00:37:27.565 "vendor_id": "0x8086", 00:37:27.565 "model_number": "SPDK bdev Controller", 00:37:27.565 "serial_number": "SPDK0", 00:37:27.565 "firmware_revision": "25.01", 00:37:27.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.565 "oacs": { 00:37:27.565 "security": 0, 00:37:27.565 "format": 0, 00:37:27.565 "firmware": 0, 00:37:27.565 "ns_manage": 0 00:37:27.565 }, 00:37:27.565 "multi_ctrlr": true, 
00:37:27.565 "ana_reporting": false 00:37:27.565 }, 00:37:27.565 "vs": { 00:37:27.565 "nvme_version": "1.3" 00:37:27.565 }, 00:37:27.565 "ns_data": { 00:37:27.565 "id": 1, 00:37:27.565 "can_share": true 00:37:27.565 } 00:37:27.565 } 00:37:27.565 ], 00:37:27.565 "mp_policy": "active_passive" 00:37:27.565 } 00:37:27.565 } 00:37:27.565 ] 00:37:27.565 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=423194 00:37:27.565 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:27.565 13:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:27.565 Running I/O for 10 seconds... 00:37:28.939 Latency(us) 00:37:28.939 [2024-10-14T11:48:20.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:28.939 Nvme0n1 : 1.00 14753.00 57.63 0.00 0.00 0.00 0.00 0.00 00:37:28.939 [2024-10-14T11:48:20.792Z] =================================================================================================================== 00:37:28.939 [2024-10-14T11:48:20.792Z] Total : 14753.00 57.63 0.00 0.00 0.00 0.00 0.00 00:37:28.939 00:37:29.503 13:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:29.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:29.761 Nvme0n1 : 2.00 14911.00 58.25 0.00 0.00 0.00 0.00 0.00 00:37:29.761 [2024-10-14T11:48:21.614Z] 
=================================================================================================================== 00:37:29.761 [2024-10-14T11:48:21.614Z] Total : 14911.00 58.25 0.00 0.00 0.00 0.00 0.00 00:37:29.761 00:37:29.761 true 00:37:29.761 13:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:29.761 13:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:30.326 13:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:30.326 13:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:30.326 13:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 423194 00:37:30.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:30.584 Nvme0n1 : 3.00 15032.67 58.72 0.00 0.00 0.00 0.00 0.00 00:37:30.584 [2024-10-14T11:48:22.437Z] =================================================================================================================== 00:37:30.584 [2024-10-14T11:48:22.437Z] Total : 15032.67 58.72 0.00 0.00 0.00 0.00 0.00 00:37:30.584 00:37:31.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:31.957 Nvme0n1 : 4.00 15123.75 59.08 0.00 0.00 0.00 0.00 0.00 00:37:31.957 [2024-10-14T11:48:23.810Z] =================================================================================================================== 00:37:31.957 [2024-10-14T11:48:23.810Z] Total : 15123.75 59.08 0.00 0.00 0.00 0.00 0.00 00:37:31.957 00:37:32.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:37:32.891 Nvme0n1 : 5.00 15201.80 59.38 0.00 0.00 0.00 0.00 0.00 00:37:32.891 [2024-10-14T11:48:24.744Z] =================================================================================================================== 00:37:32.891 [2024-10-14T11:48:24.744Z] Total : 15201.80 59.38 0.00 0.00 0.00 0.00 0.00 00:37:32.891 00:37:33.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.824 Nvme0n1 : 6.00 15265.50 59.63 0.00 0.00 0.00 0.00 0.00 00:37:33.824 [2024-10-14T11:48:25.677Z] =================================================================================================================== 00:37:33.824 [2024-10-14T11:48:25.677Z] Total : 15265.50 59.63 0.00 0.00 0.00 0.00 0.00 00:37:33.824 00:37:34.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.756 Nvme0n1 : 7.00 15302.86 59.78 0.00 0.00 0.00 0.00 0.00 00:37:34.756 [2024-10-14T11:48:26.609Z] =================================================================================================================== 00:37:34.756 [2024-10-14T11:48:26.609Z] Total : 15302.86 59.78 0.00 0.00 0.00 0.00 0.00 00:37:34.756 00:37:35.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:35.689 Nvme0n1 : 8.00 15337.12 59.91 0.00 0.00 0.00 0.00 0.00 00:37:35.689 [2024-10-14T11:48:27.542Z] =================================================================================================================== 00:37:35.689 [2024-10-14T11:48:27.542Z] Total : 15337.12 59.91 0.00 0.00 0.00 0.00 0.00 00:37:35.689 00:37:36.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:36.634 Nvme0n1 : 9.00 15377.89 60.07 0.00 0.00 0.00 0.00 0.00 00:37:36.634 [2024-10-14T11:48:28.487Z] =================================================================================================================== 00:37:36.634 [2024-10-14T11:48:28.487Z] Total : 15377.89 60.07 0.00 0.00 0.00 0.00 0.00 00:37:36.634 
00:37:37.684 00:37:37.684 Latency(us) 00:37:37.684 [2024-10-14T11:48:29.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:37.684 Nvme0n1 : 10.01 15396.17 60.14 0.00 0.00 8308.18 4538.97 18738.44 00:37:37.684 [2024-10-14T11:48:29.537Z] =================================================================================================================== 00:37:37.684 [2024-10-14T11:48:29.537Z] Total : 15396.17 60.14 0.00 0.00 8308.18 4538.97 18738.44 00:37:37.684 { 00:37:37.684 "results": [ 00:37:37.684 { 00:37:37.684 "job": "Nvme0n1", 00:37:37.684 "core_mask": "0x2", 00:37:37.684 "workload": "randwrite", 00:37:37.684 "status": "finished", 00:37:37.684 "queue_depth": 128, 00:37:37.684 "io_size": 4096, 00:37:37.684 "runtime": 10.005084, 00:37:37.684 "iops": 15396.17258585735, 00:37:37.684 "mibps": 60.14129916350527, 00:37:37.684 "io_failed": 0, 00:37:37.684 "io_timeout": 0, 00:37:37.684 "avg_latency_us": 8308.18132225396, 00:37:37.684 "min_latency_us": 4538.974814814815, 00:37:37.684 "max_latency_us": 18738.44148148148 00:37:37.684 } 00:37:37.684 ], 00:37:37.684 "core_count": 1 00:37:37.684 } 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 423062 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 423062 ']' 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 423062 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:37.684 13:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 423062 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 423062' 00:37:37.684 killing process with pid 423062 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 423062 00:37:37.684 Received shutdown signal, test time was about 10.000000 seconds 00:37:37.684 00:37:37.684 Latency(us) 00:37:37.684 [2024-10-14T11:48:29.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.684 [2024-10-14T11:48:29.537Z] =================================================================================================================== 00:37:37.684 [2024-10-14T11:48:29.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:37.684 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 423062 00:37:37.961 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:38.219 13:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:38.477 13:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:38.477 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:38.735 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:38.735 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:38.735 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:38.993 [2024-10-14 13:48:30.779161] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:38.993 13:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:38.993 13:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:39.252 request: 00:37:39.252 { 00:37:39.252 "uuid": "76c67ec1-503b-4247-9858-3acac65e9548", 00:37:39.252 "method": "bdev_lvol_get_lvstores", 00:37:39.252 "req_id": 1 00:37:39.252 } 00:37:39.252 Got JSON-RPC error response 00:37:39.252 response: 00:37:39.252 { 00:37:39.252 "code": -19, 00:37:39.252 "message": "No such device" 00:37:39.252 } 00:37:39.252 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # 
es=1 00:37:39.252 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.252 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.252 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.252 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:39.510 aio_bdev 00:37:39.510 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 408ef36a-8e4e-4f17-96b9-1250c5ff4066 00:37:39.510 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=408ef36a-8e4e-4f17-96b9-1250c5ff4066 00:37:39.510 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:39.510 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:37:39.510 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:39.510 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:39.510 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:39.769 13:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 408ef36a-8e4e-4f17-96b9-1250c5ff4066 -t 2000 00:37:40.027 [ 00:37:40.027 { 00:37:40.027 "name": "408ef36a-8e4e-4f17-96b9-1250c5ff4066", 00:37:40.027 "aliases": [ 00:37:40.027 "lvs/lvol" 00:37:40.027 ], 00:37:40.027 "product_name": "Logical Volume", 00:37:40.027 "block_size": 4096, 00:37:40.027 "num_blocks": 38912, 00:37:40.027 "uuid": "408ef36a-8e4e-4f17-96b9-1250c5ff4066", 00:37:40.027 "assigned_rate_limits": { 00:37:40.027 "rw_ios_per_sec": 0, 00:37:40.027 "rw_mbytes_per_sec": 0, 00:37:40.027 "r_mbytes_per_sec": 0, 00:37:40.027 "w_mbytes_per_sec": 0 00:37:40.027 }, 00:37:40.027 "claimed": false, 00:37:40.027 "zoned": false, 00:37:40.027 "supported_io_types": { 00:37:40.027 "read": true, 00:37:40.027 "write": true, 00:37:40.027 "unmap": true, 00:37:40.027 "flush": false, 00:37:40.027 "reset": true, 00:37:40.027 "nvme_admin": false, 00:37:40.027 "nvme_io": false, 00:37:40.027 "nvme_io_md": false, 00:37:40.027 "write_zeroes": true, 00:37:40.027 "zcopy": false, 00:37:40.027 "get_zone_info": false, 00:37:40.027 "zone_management": false, 00:37:40.027 "zone_append": false, 00:37:40.027 "compare": false, 00:37:40.027 "compare_and_write": false, 00:37:40.027 "abort": false, 00:37:40.027 "seek_hole": true, 00:37:40.027 "seek_data": true, 00:37:40.027 "copy": false, 00:37:40.027 "nvme_iov_md": false 00:37:40.027 }, 00:37:40.027 "driver_specific": { 00:37:40.027 "lvol": { 00:37:40.027 "lvol_store_uuid": "76c67ec1-503b-4247-9858-3acac65e9548", 00:37:40.027 "base_bdev": "aio_bdev", 00:37:40.027 "thin_provision": false, 00:37:40.027 "num_allocated_clusters": 38, 00:37:40.027 "snapshot": false, 00:37:40.027 "clone": false, 00:37:40.027 "esnap_clone": false 00:37:40.027 } 00:37:40.027 } 00:37:40.027 } 00:37:40.027 ] 00:37:40.285 13:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:37:40.285 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:40.285 13:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:40.543 13:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:40.543 13:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:40.543 13:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:40.801 13:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:40.801 13:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 408ef36a-8e4e-4f17-96b9-1250c5ff4066 00:37:41.059 13:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76c67ec1-503b-4247-9858-3acac65e9548 00:37:41.317 13:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:41.575 13:48:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:41.575 00:37:41.575 real 0m17.636s 00:37:41.575 user 0m17.190s 00:37:41.575 sys 0m1.816s 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:41.575 ************************************ 00:37:41.575 END TEST lvs_grow_clean 00:37:41.575 ************************************ 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:41.575 ************************************ 00:37:41.575 START TEST lvs_grow_dirty 00:37:41.575 ************************************ 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 
00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:41.575 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:41.833 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:41.833 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:42.091 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:42.091 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:42.091 13:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:42.348 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:42.348 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:42.348 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 74db5065-76c8-452a-950a-f8e8a7f606fc lvol 150 00:37:42.607 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ffecd671-8ba5-4f75-af1c-0b943e0d6603 00:37:42.607 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:42.607 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:42.865 [2024-10-14 13:48:34.687071] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:42.865 [2024-10-14 13:48:34.687218] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:42.865 true 00:37:42.865 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:42.865 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:43.123 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:43.123 13:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:43.688 13:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ffecd671-8ba5-4f75-af1c-0b943e0d6603 00:37:43.688 13:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:43.946 [2024-10-14 13:48:35.775403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.946 13:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:44.511 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=425106 00:37:44.511 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:44.511 13:48:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:44.511 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 425106 /var/tmp/bdevperf.sock 00:37:44.511 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 425106 ']' 00:37:44.511 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:44.511 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:44.512 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:44.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:44.512 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:44.512 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:44.512 [2024-10-14 13:48:36.117959] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:37:44.512 [2024-10-14 13:48:36.118049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425106 ] 00:37:44.512 [2024-10-14 13:48:36.184743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.512 [2024-10-14 13:48:36.238514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.512 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:44.512 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:37:44.512 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:45.077 Nvme0n1 00:37:45.077 13:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:45.334 [ 00:37:45.334 { 00:37:45.334 "name": "Nvme0n1", 00:37:45.334 "aliases": [ 00:37:45.334 "ffecd671-8ba5-4f75-af1c-0b943e0d6603" 00:37:45.334 ], 00:37:45.334 "product_name": "NVMe disk", 00:37:45.334 "block_size": 4096, 00:37:45.334 "num_blocks": 38912, 00:37:45.334 "uuid": "ffecd671-8ba5-4f75-af1c-0b943e0d6603", 00:37:45.334 "numa_id": 0, 00:37:45.334 "assigned_rate_limits": { 00:37:45.334 "rw_ios_per_sec": 0, 00:37:45.334 "rw_mbytes_per_sec": 0, 00:37:45.334 "r_mbytes_per_sec": 0, 00:37:45.334 "w_mbytes_per_sec": 0 00:37:45.334 }, 00:37:45.334 "claimed": false, 00:37:45.334 "zoned": false, 
00:37:45.334 "supported_io_types": { 00:37:45.334 "read": true, 00:37:45.334 "write": true, 00:37:45.334 "unmap": true, 00:37:45.334 "flush": true, 00:37:45.334 "reset": true, 00:37:45.334 "nvme_admin": true, 00:37:45.334 "nvme_io": true, 00:37:45.334 "nvme_io_md": false, 00:37:45.334 "write_zeroes": true, 00:37:45.334 "zcopy": false, 00:37:45.334 "get_zone_info": false, 00:37:45.334 "zone_management": false, 00:37:45.334 "zone_append": false, 00:37:45.334 "compare": true, 00:37:45.334 "compare_and_write": true, 00:37:45.334 "abort": true, 00:37:45.334 "seek_hole": false, 00:37:45.334 "seek_data": false, 00:37:45.334 "copy": true, 00:37:45.334 "nvme_iov_md": false 00:37:45.334 }, 00:37:45.334 "memory_domains": [ 00:37:45.334 { 00:37:45.334 "dma_device_id": "system", 00:37:45.334 "dma_device_type": 1 00:37:45.334 } 00:37:45.334 ], 00:37:45.334 "driver_specific": { 00:37:45.334 "nvme": [ 00:37:45.334 { 00:37:45.334 "trid": { 00:37:45.334 "trtype": "TCP", 00:37:45.334 "adrfam": "IPv4", 00:37:45.334 "traddr": "10.0.0.2", 00:37:45.334 "trsvcid": "4420", 00:37:45.335 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:45.335 }, 00:37:45.335 "ctrlr_data": { 00:37:45.335 "cntlid": 1, 00:37:45.335 "vendor_id": "0x8086", 00:37:45.335 "model_number": "SPDK bdev Controller", 00:37:45.335 "serial_number": "SPDK0", 00:37:45.335 "firmware_revision": "25.01", 00:37:45.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:45.335 "oacs": { 00:37:45.335 "security": 0, 00:37:45.335 "format": 0, 00:37:45.335 "firmware": 0, 00:37:45.335 "ns_manage": 0 00:37:45.335 }, 00:37:45.335 "multi_ctrlr": true, 00:37:45.335 "ana_reporting": false 00:37:45.335 }, 00:37:45.335 "vs": { 00:37:45.335 "nvme_version": "1.3" 00:37:45.335 }, 00:37:45.335 "ns_data": { 00:37:45.335 "id": 1, 00:37:45.335 "can_share": true 00:37:45.335 } 00:37:45.335 } 00:37:45.335 ], 00:37:45.335 "mp_policy": "active_passive" 00:37:45.335 } 00:37:45.335 } 00:37:45.335 ] 00:37:45.335 13:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=425238 00:37:45.335 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:45.335 13:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:45.592 Running I/O for 10 seconds... 00:37:46.525 Latency(us) 00:37:46.525 [2024-10-14T11:48:38.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:46.525 Nvme0n1 : 1.00 14835.00 57.95 0.00 0.00 0.00 0.00 0.00 00:37:46.525 [2024-10-14T11:48:38.378Z] =================================================================================================================== 00:37:46.525 [2024-10-14T11:48:38.378Z] Total : 14835.00 57.95 0.00 0.00 0.00 0.00 0.00 00:37:46.525 00:37:47.458 13:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:47.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.458 Nvme0n1 : 2.00 14994.00 58.57 0.00 0.00 0.00 0.00 0.00 00:37:47.458 [2024-10-14T11:48:39.311Z] =================================================================================================================== 00:37:47.458 [2024-10-14T11:48:39.311Z] Total : 14994.00 58.57 0.00 0.00 0.00 0.00 0.00 00:37:47.458 00:37:47.716 true 00:37:47.716 13:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:47.716 13:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:47.973 13:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:47.973 13:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:47.973 13:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 425238 00:37:48.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.539 Nvme0n1 : 3.00 15083.00 58.92 0.00 0.00 0.00 0.00 0.00 00:37:48.539 [2024-10-14T11:48:40.392Z] =================================================================================================================== 00:37:48.539 [2024-10-14T11:48:40.392Z] Total : 15083.00 58.92 0.00 0.00 0.00 0.00 0.00 00:37:48.539 00:37:49.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.471 Nvme0n1 : 4.00 15112.75 59.03 0.00 0.00 0.00 0.00 0.00 00:37:49.471 [2024-10-14T11:48:41.324Z] =================================================================================================================== 00:37:49.471 [2024-10-14T11:48:41.325Z] Total : 15112.75 59.03 0.00 0.00 0.00 0.00 0.00 00:37:49.472 00:37:50.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.843 Nvme0n1 : 5.00 15205.60 59.40 0.00 0.00 0.00 0.00 0.00 00:37:50.843 [2024-10-14T11:48:42.696Z] =================================================================================================================== 00:37:50.843 [2024-10-14T11:48:42.696Z] Total : 15205.60 59.40 0.00 0.00 0.00 0.00 0.00 00:37:50.843 00:37:51.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:37:51.776 Nvme0n1 : 6.00 15232.50 59.50 0.00 0.00 0.00 0.00 0.00 00:37:51.776 [2024-10-14T11:48:43.629Z] =================================================================================================================== 00:37:51.776 [2024-10-14T11:48:43.629Z] Total : 15232.50 59.50 0.00 0.00 0.00 0.00 0.00 00:37:51.776 00:37:52.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.709 Nvme0n1 : 7.00 15290.43 59.73 0.00 0.00 0.00 0.00 0.00 00:37:52.709 [2024-10-14T11:48:44.562Z] =================================================================================================================== 00:37:52.709 [2024-10-14T11:48:44.562Z] Total : 15290.43 59.73 0.00 0.00 0.00 0.00 0.00 00:37:52.709 00:37:53.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.640 Nvme0n1 : 8.00 15346.12 59.95 0.00 0.00 0.00 0.00 0.00 00:37:53.640 [2024-10-14T11:48:45.493Z] =================================================================================================================== 00:37:53.640 [2024-10-14T11:48:45.493Z] Total : 15346.12 59.95 0.00 0.00 0.00 0.00 0.00 00:37:53.640 00:37:54.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:54.573 Nvme0n1 : 9.00 15392.78 60.13 0.00 0.00 0.00 0.00 0.00 00:37:54.573 [2024-10-14T11:48:46.426Z] =================================================================================================================== 00:37:54.573 [2024-10-14T11:48:46.426Z] Total : 15392.78 60.13 0.00 0.00 0.00 0.00 0.00 00:37:54.573 00:37:55.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.506 Nvme0n1 : 10.00 15417.40 60.22 0.00 0.00 0.00 0.00 0.00 00:37:55.506 [2024-10-14T11:48:47.359Z] =================================================================================================================== 00:37:55.506 [2024-10-14T11:48:47.359Z] Total : 15417.40 60.22 0.00 0.00 0.00 0.00 0.00 00:37:55.506 00:37:55.506 
00:37:55.506 Latency(us) 00:37:55.506 [2024-10-14T11:48:47.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.506 Nvme0n1 : 10.01 15421.74 60.24 0.00 0.00 8295.21 4369.07 18155.90 00:37:55.506 [2024-10-14T11:48:47.359Z] =================================================================================================================== 00:37:55.506 [2024-10-14T11:48:47.359Z] Total : 15421.74 60.24 0.00 0.00 8295.21 4369.07 18155.90 00:37:55.506 { 00:37:55.506 "results": [ 00:37:55.506 { 00:37:55.506 "job": "Nvme0n1", 00:37:55.506 "core_mask": "0x2", 00:37:55.506 "workload": "randwrite", 00:37:55.506 "status": "finished", 00:37:55.506 "queue_depth": 128, 00:37:55.506 "io_size": 4096, 00:37:55.506 "runtime": 10.005488, 00:37:55.506 "iops": 15421.736550980822, 00:37:55.506 "mibps": 60.24115840226884, 00:37:55.506 "io_failed": 0, 00:37:55.506 "io_timeout": 0, 00:37:55.506 "avg_latency_us": 8295.208384442823, 00:37:55.506 "min_latency_us": 4369.066666666667, 00:37:55.506 "max_latency_us": 18155.89925925926 00:37:55.506 } 00:37:55.506 ], 00:37:55.506 "core_count": 1 00:37:55.506 } 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 425106 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 425106 ']' 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 425106 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:55.506 13:48:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 425106 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 425106' 00:37:55.506 killing process with pid 425106 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 425106 00:37:55.506 Received shutdown signal, test time was about 10.000000 seconds 00:37:55.506 00:37:55.506 Latency(us) 00:37:55.506 [2024-10-14T11:48:47.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.506 [2024-10-14T11:48:47.359Z] =================================================================================================================== 00:37:55.506 [2024-10-14T11:48:47.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:55.506 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 425106 00:37:55.764 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:56.022 13:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:56.592 13:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:56.592 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:56.850 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:56.850 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:56.850 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 422633 00:37:56.850 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 422633 00:37:56.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 422633 Killed "${NVMF_APP[@]}" "$@" 00:37:56.850 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:56.850 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=426573 00:37:56.851 13:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 426573 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 426573 ']' 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:56.851 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:56.851 [2024-10-14 13:48:48.553020] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:56.851 [2024-10-14 13:48:48.554047] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:37:56.851 [2024-10-14 13:48:48.554098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.851 [2024-10-14 13:48:48.620782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.851 [2024-10-14 13:48:48.662092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.851 [2024-10-14 13:48:48.662160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.851 [2024-10-14 13:48:48.662174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.851 [2024-10-14 13:48:48.662185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.851 [2024-10-14 13:48:48.662194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.851 [2024-10-14 13:48:48.662709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.109 [2024-10-14 13:48:48.742848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:57.109 [2024-10-14 13:48:48.743144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:57.109 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:57.109 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:37:57.109 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:57.109 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:57.109 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:57.109 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.109 13:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:57.366 [2024-10-14 13:48:49.121695] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:57.366 [2024-10-14 13:48:49.121871] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:57.366 [2024-10-14 13:48:49.121936] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:57.366 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:57.366 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ffecd671-8ba5-4f75-af1c-0b943e0d6603 00:37:57.367 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=ffecd671-8ba5-4f75-af1c-0b943e0d6603 00:37:57.367 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:57.367 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:57.367 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:57.367 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:57.367 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.624 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ffecd671-8ba5-4f75-af1c-0b943e0d6603 -t 2000 00:37:57.882 [ 00:37:57.882 { 00:37:57.882 "name": "ffecd671-8ba5-4f75-af1c-0b943e0d6603", 00:37:57.882 "aliases": [ 00:37:57.882 "lvs/lvol" 00:37:57.882 ], 00:37:57.882 "product_name": "Logical Volume", 00:37:57.882 "block_size": 4096, 00:37:57.882 "num_blocks": 38912, 00:37:57.882 "uuid": "ffecd671-8ba5-4f75-af1c-0b943e0d6603", 00:37:57.882 "assigned_rate_limits": { 00:37:57.882 "rw_ios_per_sec": 0, 00:37:57.882 "rw_mbytes_per_sec": 0, 00:37:57.882 "r_mbytes_per_sec": 0, 00:37:57.882 "w_mbytes_per_sec": 0 00:37:57.882 }, 00:37:57.882 "claimed": false, 00:37:57.882 "zoned": false, 00:37:57.882 "supported_io_types": { 00:37:57.882 "read": true, 00:37:57.882 "write": true, 00:37:57.882 "unmap": true, 00:37:57.882 "flush": false, 00:37:57.882 "reset": true, 00:37:57.882 "nvme_admin": false, 00:37:57.882 "nvme_io": false, 00:37:57.882 "nvme_io_md": false, 00:37:57.882 "write_zeroes": true, 
00:37:57.882 "zcopy": false, 00:37:57.882 "get_zone_info": false, 00:37:57.882 "zone_management": false, 00:37:57.882 "zone_append": false, 00:37:57.882 "compare": false, 00:37:57.882 "compare_and_write": false, 00:37:57.882 "abort": false, 00:37:57.882 "seek_hole": true, 00:37:57.882 "seek_data": true, 00:37:57.882 "copy": false, 00:37:57.882 "nvme_iov_md": false 00:37:57.882 }, 00:37:57.882 "driver_specific": { 00:37:57.882 "lvol": { 00:37:57.882 "lvol_store_uuid": "74db5065-76c8-452a-950a-f8e8a7f606fc", 00:37:57.882 "base_bdev": "aio_bdev", 00:37:57.882 "thin_provision": false, 00:37:57.882 "num_allocated_clusters": 38, 00:37:57.882 "snapshot": false, 00:37:57.882 "clone": false, 00:37:57.882 "esnap_clone": false 00:37:57.882 } 00:37:57.882 } 00:37:57.882 } 00:37:57.882 ] 00:37:57.882 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:57.882 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:57.882 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:58.140 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:58.140 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:58.140 13:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:58.397 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:58.397 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:58.656 [2024-10-14 13:48:50.495230] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:58.915 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:59.173 request: 00:37:59.173 { 00:37:59.173 "uuid": "74db5065-76c8-452a-950a-f8e8a7f606fc", 00:37:59.173 "method": "bdev_lvol_get_lvstores", 00:37:59.173 "req_id": 1 00:37:59.173 } 00:37:59.173 Got JSON-RPC error response 00:37:59.173 response: 00:37:59.173 { 00:37:59.173 "code": -19, 00:37:59.173 "message": "No such device" 00:37:59.173 } 00:37:59.173 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:37:59.173 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:59.173 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:59.173 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:59.173 13:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:59.430 aio_bdev 00:37:59.430 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ffecd671-8ba5-4f75-af1c-0b943e0d6603 00:37:59.430 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ffecd671-8ba5-4f75-af1c-0b943e0d6603 00:37:59.430 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:59.430 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:37:59.430 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:59.430 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:59.430 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:59.688 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ffecd671-8ba5-4f75-af1c-0b943e0d6603 -t 2000 00:37:59.945 [ 00:37:59.945 { 00:37:59.945 "name": "ffecd671-8ba5-4f75-af1c-0b943e0d6603", 00:37:59.945 "aliases": [ 00:37:59.945 "lvs/lvol" 00:37:59.945 ], 00:37:59.945 "product_name": "Logical Volume", 00:37:59.945 "block_size": 4096, 00:37:59.945 "num_blocks": 38912, 00:37:59.945 "uuid": "ffecd671-8ba5-4f75-af1c-0b943e0d6603", 00:37:59.945 "assigned_rate_limits": { 00:37:59.945 "rw_ios_per_sec": 0, 00:37:59.945 "rw_mbytes_per_sec": 0, 00:37:59.945 
"r_mbytes_per_sec": 0, 00:37:59.945 "w_mbytes_per_sec": 0 00:37:59.945 }, 00:37:59.945 "claimed": false, 00:37:59.945 "zoned": false, 00:37:59.945 "supported_io_types": { 00:37:59.945 "read": true, 00:37:59.945 "write": true, 00:37:59.945 "unmap": true, 00:37:59.945 "flush": false, 00:37:59.945 "reset": true, 00:37:59.945 "nvme_admin": false, 00:37:59.945 "nvme_io": false, 00:37:59.945 "nvme_io_md": false, 00:37:59.945 "write_zeroes": true, 00:37:59.945 "zcopy": false, 00:37:59.945 "get_zone_info": false, 00:37:59.945 "zone_management": false, 00:37:59.945 "zone_append": false, 00:37:59.945 "compare": false, 00:37:59.945 "compare_and_write": false, 00:37:59.945 "abort": false, 00:37:59.945 "seek_hole": true, 00:37:59.945 "seek_data": true, 00:37:59.945 "copy": false, 00:37:59.945 "nvme_iov_md": false 00:37:59.945 }, 00:37:59.945 "driver_specific": { 00:37:59.945 "lvol": { 00:37:59.945 "lvol_store_uuid": "74db5065-76c8-452a-950a-f8e8a7f606fc", 00:37:59.945 "base_bdev": "aio_bdev", 00:37:59.945 "thin_provision": false, 00:37:59.945 "num_allocated_clusters": 38, 00:37:59.945 "snapshot": false, 00:37:59.945 "clone": false, 00:37:59.945 "esnap_clone": false 00:37:59.945 } 00:37:59.945 } 00:37:59.945 } 00:37:59.945 ] 00:37:59.945 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:37:59.945 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:37:59.945 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:00.203 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:00.203 13:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:38:00.203 13:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:00.461 13:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:00.461 13:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ffecd671-8ba5-4f75-af1c-0b943e0d6603 00:38:00.719 13:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 74db5065-76c8-452a-950a-f8e8a7f606fc 00:38:00.976 13:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:01.234 00:38:01.234 real 0m19.698s 00:38:01.234 user 0m36.623s 00:38:01.234 sys 0m4.784s 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:01.234 ************************************ 00:38:01.234 END TEST lvs_grow_dirty 00:38:01.234 ************************************ 
00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:01.234 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:01.234 nvmf_trace.0 00:38:01.235 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:01.235 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:01.235 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:01.235 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:01.235 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:01.235 13:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:01.235 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:01.235 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:01.492 rmmod nvme_tcp 00:38:01.492 rmmod nvme_fabrics 00:38:01.492 rmmod nvme_keyring 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 426573 ']' 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 426573 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 426573 ']' 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 426573 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 426573 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:01.492 13:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 426573' 00:38:01.492 killing process with pid 426573 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 426573 00:38:01.492 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 426573 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:01.751 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:01.752 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.752 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.752 13:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.653 13:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.653 00:38:03.653 real 0m42.804s 00:38:03.653 user 0m55.585s 00:38:03.653 sys 0m8.577s 00:38:03.653 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:03.653 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:03.653 ************************************ 00:38:03.653 END TEST nvmf_lvs_grow 00:38:03.653 ************************************ 00:38:03.654 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:03.654 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:03.654 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:03.654 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:03.654 ************************************ 00:38:03.654 START TEST nvmf_bdev_io_wait 00:38:03.654 ************************************ 00:38:03.654 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:03.912 * Looking for test storage... 
00:38:03.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.912 --rc genhtml_branch_coverage=1 00:38:03.912 --rc genhtml_function_coverage=1 00:38:03.912 --rc genhtml_legend=1 00:38:03.912 --rc geninfo_all_blocks=1 00:38:03.912 --rc geninfo_unexecuted_blocks=1 00:38:03.912 00:38:03.912 ' 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.912 --rc genhtml_branch_coverage=1 00:38:03.912 --rc genhtml_function_coverage=1 00:38:03.912 --rc genhtml_legend=1 00:38:03.912 --rc geninfo_all_blocks=1 00:38:03.912 --rc geninfo_unexecuted_blocks=1 00:38:03.912 00:38:03.912 ' 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.912 --rc genhtml_branch_coverage=1 00:38:03.912 --rc genhtml_function_coverage=1 00:38:03.912 --rc genhtml_legend=1 00:38:03.912 --rc geninfo_all_blocks=1 00:38:03.912 --rc geninfo_unexecuted_blocks=1 00:38:03.912 00:38:03.912 ' 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:03.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.912 --rc genhtml_branch_coverage=1 00:38:03.912 --rc genhtml_function_coverage=1 
00:38:03.912 --rc genhtml_legend=1 00:38:03.912 --rc geninfo_all_blocks=1 00:38:03.912 --rc geninfo_unexecuted_blocks=1 00:38:03.912 00:38:03.912 ' 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.912 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.913 13:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.913 13:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.913 13:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:03.913 13:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.913 13:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:06.445 13:48:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:06.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:06.445 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:06.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:06.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:06.445 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:38:06.445 13:48:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:06.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:06.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:38:06.446 00:38:06.446 --- 10.0.0.2 ping statistics --- 00:38:06.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:06.446 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:06.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:06.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:38:06.446 00:38:06.446 --- 10.0.0.1 ping statistics --- 00:38:06.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:06.446 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:06.446 13:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:06.446 13:48:58 
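The nvmftestinit phase traced above builds a two-namespace loopback topology: one port of the E810 pair (cvl_0_0) is moved into the namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, TCP port 4420 is opened for NVMe/TCP, and both directions are verified with ping. As a minimal sketch, the helper below only assembles that command sequence as strings rather than executing it (the real steps live in nvmf/common.sh and require root):

```python
# Sketch of the topology nvmftestinit builds in this log. The
# interface names, namespace name, addresses, and port are the ones
# visible in the trace above; nothing here is executed.
NS = "cvl_0_0_ns_spdk"

def netns_setup_cmds(target_if="cvl_0_0", init_if="cvl_0_1"):
    """Return the traced shell commands, in order, without running them."""
    in_ns = f"ip netns exec {NS} "
    return [
        f"ip netns add {NS}",                                  # create target namespace
        f"ip link set {target_if} netns {NS}",                 # move target port into it
        f"ip addr add 10.0.0.1/24 dev {init_if}",              # initiator address (root ns)
        in_ns + f"ip addr add 10.0.0.2/24 dev {target_if}",    # target address (inside ns)
        f"ip link set {init_if} up",
        in_ns + f"ip link set {target_if} up",
        in_ns + "ip link set lo up",
        f"iptables -I INPUT 1 -i {init_if} -p tcp --dport 4420 -j ACCEPT",  # open NVMe/TCP port
        "ping -c 1 10.0.0.2",                                  # initiator -> target check
        in_ns + "ping -c 1 10.0.0.1",                          # target -> initiator check
    ]
```

With this layout, the nvmf_tgt process is later launched under `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix), so target and initiator traffic crosses the physical cable between the two ports rather than the kernel loopback.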
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=429092 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 429092 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 429092 ']' 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.446 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.446 [2024-10-14 13:48:58.073644] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:06.446 [2024-10-14 13:48:58.074732] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:38:06.446 [2024-10-14 13:48:58.074801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:06.446 [2024-10-14 13:48:58.142939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:06.446 [2024-10-14 13:48:58.190298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:06.446 [2024-10-14 13:48:58.190358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:06.446 [2024-10-14 13:48:58.190373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:06.446 [2024-10-14 13:48:58.190385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:06.446 [2024-10-14 13:48:58.190394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:06.446 [2024-10-14 13:48:58.191972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.446 [2024-10-14 13:48:58.192112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:06.446 [2024-10-14 13:48:58.192180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:06.446 [2024-10-14 13:48:58.192184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.446 [2024-10-14 13:48:58.192701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.705 13:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 [2024-10-14 13:48:58.408555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:06.705 [2024-10-14 13:48:58.408780] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:06.705 [2024-10-14 13:48:58.409758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:06.705 [2024-10-14 13:48:58.410638] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 [2024-10-14 13:48:58.416901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 Malloc0 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.705 13:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.705 [2024-10-14 13:48:58.473194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=429234 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=429236 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:06.705 13:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:06.705 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=429238 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:06.706 { 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme$subsystem", 00:38:06.706 "trtype": "$TEST_TRANSPORT", 00:38:06.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "$NVMF_PORT", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.706 "hdgst": ${hdgst:-false}, 00:38:06.706 "ddgst": ${ddgst:-false} 00:38:06.706 }, 00:38:06.706 "method": "bdev_nvme_attach_controller" 00:38:06.706 } 00:38:06.706 EOF 00:38:06.706 )") 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:06.706 13:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:06.706 { 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme$subsystem", 00:38:06.706 "trtype": "$TEST_TRANSPORT", 00:38:06.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "$NVMF_PORT", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.706 "hdgst": ${hdgst:-false}, 00:38:06.706 "ddgst": ${ddgst:-false} 00:38:06.706 }, 00:38:06.706 "method": "bdev_nvme_attach_controller" 00:38:06.706 } 00:38:06.706 EOF 00:38:06.706 )") 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=429240 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:06.706 13:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:06.706 { 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme$subsystem", 00:38:06.706 "trtype": "$TEST_TRANSPORT", 00:38:06.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "$NVMF_PORT", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.706 "hdgst": ${hdgst:-false}, 00:38:06.706 "ddgst": ${ddgst:-false} 00:38:06.706 }, 00:38:06.706 "method": "bdev_nvme_attach_controller" 00:38:06.706 } 00:38:06.706 EOF 00:38:06.706 )") 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:06.706 { 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme$subsystem", 00:38:06.706 "trtype": "$TEST_TRANSPORT", 00:38:06.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "$NVMF_PORT", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.706 "hdgst": ${hdgst:-false}, 00:38:06.706 "ddgst": ${ddgst:-false} 00:38:06.706 }, 
00:38:06.706 "method": "bdev_nvme_attach_controller" 00:38:06.706 } 00:38:06.706 EOF 00:38:06.706 )") 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 429234 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme1", 00:38:06.706 "trtype": "tcp", 00:38:06.706 "traddr": "10.0.0.2", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "4420", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:06.706 "hdgst": false, 00:38:06.706 "ddgst": false 00:38:06.706 }, 00:38:06.706 "method": "bdev_nvme_attach_controller" 00:38:06.706 }' 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme1", 00:38:06.706 "trtype": "tcp", 00:38:06.706 "traddr": "10.0.0.2", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "4420", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:06.706 "hdgst": false, 00:38:06.706 "ddgst": false 00:38:06.706 }, 00:38:06.706 "method": "bdev_nvme_attach_controller" 00:38:06.706 }' 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme1", 00:38:06.706 "trtype": "tcp", 00:38:06.706 "traddr": "10.0.0.2", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "4420", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:06.706 "hdgst": false, 00:38:06.706 "ddgst": false 00:38:06.706 }, 00:38:06.706 "method": "bdev_nvme_attach_controller" 00:38:06.706 }' 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:06.706 13:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:06.706 "params": { 00:38:06.706 "name": "Nvme1", 00:38:06.706 "trtype": "tcp", 00:38:06.706 "traddr": "10.0.0.2", 00:38:06.706 "adrfam": "ipv4", 00:38:06.706 "trsvcid": "4420", 00:38:06.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:06.706 "hdgst": false, 00:38:06.706 "ddgst": false 00:38:06.706 }, 00:38:06.706 "method": "bdev_nvme_attach_controller" 
00:38:06.706 }' 00:38:06.706 [2024-10-14 13:48:58.524863] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:38:06.707 [2024-10-14 13:48:58.524960] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:06.707 [2024-10-14 13:48:58.524985] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:38:06.707 [2024-10-14 13:48:58.524985] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:38:06.707 [2024-10-14 13:48:58.524985] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:38:06.707 [2024-10-14 13:48:58.525070] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:06.707 [2024-10-14 13:48:58.525070] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:06.707 [2024-10-14 13:48:58.525071] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:06.965 [2024-10-14 13:48:58.697917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.965 [2024-10-14 13:48:58.740378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:06.965 [2024-10-14 13:48:58.798506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.223 [2024-10-14 13:48:58.840812] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:07.223 [2024-10-14 13:48:58.898399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.223 [2024-10-14 13:48:58.942909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:07.223 [2024-10-14 13:48:58.972106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.223 [2024-10-14 13:48:59.014448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:07.480 Running I/O for 1 seconds... 00:38:07.480 Running I/O for 1 seconds... 00:38:07.480 Running I/O for 1 seconds... 00:38:07.480 Running I/O for 1 seconds... 00:38:08.413 6190.00 IOPS, 24.18 MiB/s 00:38:08.413 Latency(us) 00:38:08.413 [2024-10-14T11:49:00.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.413 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:08.413 Nvme1n1 : 1.02 6217.09 24.29 0.00 0.00 20436.29 4320.52 37865.24 00:38:08.413 [2024-10-14T11:49:00.266Z] =================================================================================================================== 00:38:08.413 [2024-10-14T11:49:00.266Z] Total : 6217.09 24.29 0.00 0.00 20436.29 4320.52 37865.24 00:38:08.413 189768.00 IOPS, 741.28 MiB/s 00:38:08.413 Latency(us) 00:38:08.413 [2024-10-14T11:49:00.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.413 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:08.413 Nvme1n1 : 1.00 189407.60 739.87 0.00 0.00 672.18 309.48 1893.26 00:38:08.413 [2024-10-14T11:49:00.266Z] =================================================================================================================== 00:38:08.413 [2024-10-14T11:49:00.266Z] Total : 189407.60 739.87 0.00 0.00 672.18 309.48 1893.26 00:38:08.413 5866.00 IOPS, 22.91 MiB/s 00:38:08.413 Latency(us) 00:38:08.413 [2024-10-14T11:49:00.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:38:08.413 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:08.413 Nvme1n1 : 1.01 5963.14 23.29 0.00 0.00 21378.79 5922.51 36311.80 00:38:08.413 [2024-10-14T11:49:00.266Z] =================================================================================================================== 00:38:08.413 [2024-10-14T11:49:00.266Z] Total : 5963.14 23.29 0.00 0.00 21378.79 5922.51 36311.80 00:38:08.671 7728.00 IOPS, 30.19 MiB/s 00:38:08.671 Latency(us) 00:38:08.671 [2024-10-14T11:49:00.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.671 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:08.671 Nvme1n1 : 1.01 7771.89 30.36 0.00 0.00 16374.76 4781.70 21845.33 00:38:08.671 [2024-10-14T11:49:00.524Z] =================================================================================================================== 00:38:08.671 [2024-10-14T11:49:00.524Z] Total : 7771.89 30.36 0.00 0.00 16374.76 4781.70 21845.33 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 429236 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 429238 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 429240 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.671 13:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:08.671 rmmod nvme_tcp 00:38:08.671 rmmod nvme_fabrics 00:38:08.671 rmmod nvme_keyring 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 429092 ']' 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 429092 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 429092 ']' 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 429092 00:38:08.671 13:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:08.671 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 429092 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 429092' 00:38:08.930 killing process with pid 429092 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 429092 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 429092 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # 
iptables-restore 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:08.930 13:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:11.457 00:38:11.457 real 0m7.319s 00:38:11.457 user 0m14.181s 00:38:11.457 sys 0m4.068s 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:11.457 ************************************ 00:38:11.457 END TEST nvmf_bdev_io_wait 00:38:11.457 ************************************ 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:11.457 
************************************ 00:38:11.457 START TEST nvmf_queue_depth 00:38:11.457 ************************************ 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:11.457 * Looking for test storage... 00:38:11.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:38:11.457 13:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@338 -- # local 'op=<' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@355 -- # echo 2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.457 --rc genhtml_branch_coverage=1 00:38:11.457 --rc genhtml_function_coverage=1 00:38:11.457 --rc genhtml_legend=1 00:38:11.457 --rc geninfo_all_blocks=1 00:38:11.457 --rc geninfo_unexecuted_blocks=1 00:38:11.457 00:38:11.457 ' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.457 --rc genhtml_branch_coverage=1 00:38:11.457 --rc genhtml_function_coverage=1 00:38:11.457 --rc genhtml_legend=1 00:38:11.457 --rc geninfo_all_blocks=1 00:38:11.457 --rc geninfo_unexecuted_blocks=1 00:38:11.457 00:38:11.457 ' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.457 --rc genhtml_branch_coverage=1 00:38:11.457 --rc genhtml_function_coverage=1 00:38:11.457 --rc genhtml_legend=1 00:38:11.457 --rc geninfo_all_blocks=1 
00:38:11.457 --rc geninfo_unexecuted_blocks=1 00:38:11.457 00:38:11.457 ' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:11.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:11.457 --rc genhtml_branch_coverage=1 00:38:11.457 --rc genhtml_function_coverage=1 00:38:11.457 --rc genhtml_legend=1 00:38:11.457 --rc geninfo_all_blocks=1 00:38:11.457 --rc geninfo_unexecuted_blocks=1 00:38:11.457 00:38:11.457 ' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:11.457 
13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:11.457 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.458 13:49:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:11.458 13:49:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:11.458 13:49:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:11.458 13:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:13.360 
13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:13.360 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.360 13:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:13.360 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:13.360 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:13.360 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:13.360 13:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:13.360 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:13.361 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:13.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:13.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:38:13.619 00:38:13.619 --- 10.0.0.2 ping statistics --- 00:38:13.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.619 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:13.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:38:13.619 00:38:13.619 --- 10.0.0.1 ping statistics --- 00:38:13.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.619 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:13.619 13:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=431453 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 431453 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 431453 ']' 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.619 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.619 [2024-10-14 13:49:05.329502] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:13.619 [2024-10-14 13:49:05.330655] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:38:13.619 [2024-10-14 13:49:05.330718] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.619 [2024-10-14 13:49:05.400945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.619 [2024-10-14 13:49:05.447271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.619 [2024-10-14 13:49:05.447326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.619 [2024-10-14 13:49:05.447355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.619 [2024-10-14 13:49:05.447368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.619 [2024-10-14 13:49:05.447378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.619 [2024-10-14 13:49:05.447935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.878 [2024-10-14 13:49:05.530595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:13.878 [2024-10-14 13:49:05.530928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.878 [2024-10-14 13:49:05.584554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.878 Malloc0 00:38:13.878 13:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.878 [2024-10-14 13:49:05.644682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.878 
13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=431485 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 431485 /var/tmp/bdevperf.sock 00:38:13.878 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 431485 ']' 00:38:13.879 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:13.879 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.879 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:13.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:13.879 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.879 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:13.879 [2024-10-14 13:49:05.696422] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:38:13.879 [2024-10-14 13:49:05.696506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431485 ] 00:38:14.137 [2024-10-14 13:49:05.759764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.137 [2024-10-14 13:49:05.808765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.137 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:14.137 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:14.137 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:14.137 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.137 13:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:14.394 NVMe0n1 00:38:14.394 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.394 13:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:14.652 Running I/O for 10 seconds... 
00:38:16.520 8426.00 IOPS, 32.91 MiB/s [2024-10-14T11:49:09.316Z] 8704.00 IOPS, 34.00 MiB/s [2024-10-14T11:49:10.689Z] 8561.00 IOPS, 33.44 MiB/s [2024-10-14T11:49:11.622Z] 8661.00 IOPS, 33.83 MiB/s [2024-10-14T11:49:12.556Z] 8603.20 IOPS, 33.61 MiB/s [2024-10-14T11:49:13.491Z] 8602.33 IOPS, 33.60 MiB/s [2024-10-14T11:49:14.423Z] 8631.71 IOPS, 33.72 MiB/s [2024-10-14T11:49:15.355Z] 8671.00 IOPS, 33.87 MiB/s [2024-10-14T11:49:16.729Z] 8657.00 IOPS, 33.82 MiB/s [2024-10-14T11:49:16.729Z] 8697.30 IOPS, 33.97 MiB/s 00:38:24.876 Latency(us) 00:38:24.876 [2024-10-14T11:49:16.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.876 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:24.876 Verification LBA range: start 0x0 length 0x4000 00:38:24.876 NVMe0n1 : 10.10 8711.01 34.03 0.00 0.00 117062.62 21845.33 67963.26 00:38:24.876 [2024-10-14T11:49:16.729Z] =================================================================================================================== 00:38:24.876 [2024-10-14T11:49:16.729Z] Total : 8711.01 34.03 0.00 0.00 117062.62 21845.33 67963.26 00:38:24.876 { 00:38:24.876 "results": [ 00:38:24.876 { 00:38:24.876 "job": "NVMe0n1", 00:38:24.876 "core_mask": "0x1", 00:38:24.876 "workload": "verify", 00:38:24.876 "status": "finished", 00:38:24.876 "verify_range": { 00:38:24.876 "start": 0, 00:38:24.876 "length": 16384 00:38:24.876 }, 00:38:24.876 "queue_depth": 1024, 00:38:24.876 "io_size": 4096, 00:38:24.876 "runtime": 10.100896, 00:38:24.876 "iops": 8711.009399562177, 00:38:24.876 "mibps": 34.027380467039755, 00:38:24.876 "io_failed": 0, 00:38:24.876 "io_timeout": 0, 00:38:24.876 "avg_latency_us": 117062.61783786946, 00:38:24.876 "min_latency_us": 21845.333333333332, 00:38:24.876 "max_latency_us": 67963.25925925926 00:38:24.876 } 00:38:24.876 ], 00:38:24.876 "core_count": 1 00:38:24.876 } 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 431485 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 431485 ']' 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 431485 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 431485 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 431485' 00:38:24.876 killing process with pid 431485 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 431485 00:38:24.876 Received shutdown signal, test time was about 10.000000 seconds 00:38:24.876 00:38:24.876 Latency(us) 00:38:24.876 [2024-10-14T11:49:16.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.876 [2024-10-14T11:49:16.729Z] =================================================================================================================== 00:38:24.876 [2024-10-14T11:49:16.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 431485 00:38:24.876 13:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:24.876 rmmod nvme_tcp 00:38:24.876 rmmod nvme_fabrics 00:38:24.876 rmmod nvme_keyring 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 431453 ']' 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 431453 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 431453 ']' 00:38:24.876 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 431453 00:38:24.876 13:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:25.135 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:25.135 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 431453 00:38:25.135 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:25.135 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:25.135 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 431453' 00:38:25.135 killing process with pid 431453 00:38:25.135 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 431453 00:38:25.135 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 431453 00:38:25.394 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:25.394 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:25.394 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:25.394 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:25.394 13:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:38:25.394 13:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:25.394 13:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 
00:38:25.394 13:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.394 13:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:25.394 13:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.394 13:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.394 13:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.294 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:27.294 00:38:27.294 real 0m16.197s 00:38:27.294 user 0m22.345s 00:38:27.294 sys 0m3.448s 00:38:27.294 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:27.294 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.294 ************************************ 00:38:27.294 END TEST nvmf_queue_depth 00:38:27.294 ************************************ 00:38:27.294 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:27.294 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:27.294 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:27.294 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:27.294 ************************************ 00:38:27.294 START 
TEST nvmf_target_multipath 00:38:27.294 ************************************ 00:38:27.295 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:27.553 * Looking for test storage... 00:38:27.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.553 13:49:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
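The trace above steps through the `cmp_versions`/`lt` helper from scripts/common.sh, deciding that lcov 1.15 is older than 2 before picking coverage flags. A standalone sketch of that dotted-version comparison (simplified: the real helper also splits on `-` and supports `>`/`=` operators; this version assumes plain decimal fields):

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field version comparison traced above:
# split both versions on '.', treat missing fields as 0, and compare
# numerically left to right. Returns 0 (true) when ver1 < ver2.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"        # the case from the trace
version_lt 2.1 2.1 || echo "2.1 is not < 2.1"
```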
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:27.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.553 --rc genhtml_branch_coverage=1 00:38:27.553 --rc genhtml_function_coverage=1 00:38:27.553 --rc genhtml_legend=1 00:38:27.553 --rc geninfo_all_blocks=1 00:38:27.553 --rc geninfo_unexecuted_blocks=1 00:38:27.553 00:38:27.553 ' 00:38:27.553 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:27.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.553 --rc genhtml_branch_coverage=1 00:38:27.553 --rc genhtml_function_coverage=1 00:38:27.553 --rc genhtml_legend=1 00:38:27.554 --rc geninfo_all_blocks=1 00:38:27.554 --rc geninfo_unexecuted_blocks=1 00:38:27.554 00:38:27.554 ' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:27.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.554 --rc genhtml_branch_coverage=1 00:38:27.554 --rc genhtml_function_coverage=1 00:38:27.554 --rc genhtml_legend=1 00:38:27.554 --rc geninfo_all_blocks=1 00:38:27.554 --rc geninfo_unexecuted_blocks=1 00:38:27.554 00:38:27.554 ' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:27.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.554 --rc genhtml_branch_coverage=1 00:38:27.554 --rc genhtml_function_coverage=1 00:38:27.554 --rc genhtml_legend=1 00:38:27.554 --rc geninfo_all_blocks=1 00:38:27.554 --rc geninfo_unexecuted_blocks=1 00:38:27.554 00:38:27.554 ' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:27.554 13:49:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.554 13:49:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:27.554 13:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:30.086 13:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:30.086 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:30.086 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:30.086 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.086 13:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:30.086 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:30.086 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:30.087 13:49:21 
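The `gather_supported_nvmf_pci_devs` run above buckets discovered NICs by PCI vendor:device pair into e810, x722, and mlx lists, then finds both 0x8086:0x159b ports (ice driver, cvl_0_0/cvl_0_1). A minimal sketch of that classification table, using only the IDs visible in this trace (the `nic_family` helper is illustrative, and the Mellanox entry is collapsed to a wildcard where the real script enumerates specific device IDs):

```shell
#!/usr/bin/env bash
# Sketch of the NIC bucketing seen in gather_supported_nvmf_pci_devs:
# classify a "vendor:device" string into the family lists the test
# framework keeps (e810/x722/mlx). IDs taken from the trace above.
intel=0x8086 mellanox=0x15b3

nic_family() {
    case "$1" in
        "$intel:0x1592"|"$intel:0x159b") echo e810 ;;   # Intel E810 (ice)
        "$intel:0x37d2")                 echo x722 ;;   # Intel X722
        "$mellanox:"*)                   echo mlx ;;    # simplified wildcard
        *)                               echo unknown ;;
    esac
}

nic_family "0x8086:0x159b"   # device found twice in this run
```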
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:30.087 13:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:30.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:30.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:38:30.087 00:38:30.087 --- 10.0.0.2 ping statistics --- 00:38:30.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.087 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:30.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:30.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:38:30.087 00:38:30.087 --- 10.0.0.1 ping statistics --- 00:38:30.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.087 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:30.087 only one NIC for nvmf test 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:30.087 13:49:21 
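The `nvmf_tcp_init` sequence traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, then ping in both directions) can be sketched as a dry run. This is an illustrative reconstruction, not the harness code itself; the `run` wrapper only echoes, so it is safe without root, and the interface names and IPs simply mirror the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init topology seen in the log above.
# Names (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk) mirror the log output.
set -euo pipefail

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1 TARGET_IP=10.0.0.2

run() { echo "+ $*"; }   # swap the echo for "$@" (as root) to apply for real

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"           # target NIC lives in the netns
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"    # initiator side stays in root ns
run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run ping -c 1 "$TARGET_IP"                                # root ns -> namespaced target
run ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"  # and back
```

Keeping the target NIC in a private namespace is what lets a single host exercise real TCP traffic between an initiator and a target over one physical link pair.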
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:30.087 rmmod nvme_tcp 00:38:30.087 rmmod nvme_fabrics 00:38:30.087 rmmod nvme_keyring 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:30.087 13:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:30.087 13:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
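The `iptr` teardown above works because every firewall rule the harness installs (via the `ipts` wrapper at common.sh@788) carries an `-m comment --comment 'SPDK_NVMF:…'` tag, so cleanup is just `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A minimal sketch of that tag-and-filter pattern, run against sample ruleset text rather than a live firewall:

```shell
# Tag-and-filter cleanup, demonstrated on a sample iptables-save dump.
# In the harness the real pipeline is:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:tagged"
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Drop only the rules the test suite added; everything else survives.
kept=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging rules at insert time makes teardown idempotent: it removes exactly the suite's own rules regardless of how many runs left rules behind, without touching pre-existing firewall state.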
00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.994 
13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:31.994 00:38:31.994 real 0m4.560s 00:38:31.994 user 0m0.929s 00:38:31.994 sys 0m1.595s 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:31.994 ************************************ 00:38:31.994 END TEST nvmf_target_multipath 00:38:31.994 ************************************ 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:31.994 ************************************ 00:38:31.994 START TEST nvmf_zcopy 00:38:31.994 ************************************ 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:31.994 * Looking for test storage... 
00:38:31.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:38:31.994 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:32.253 13:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:32.253 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:32.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.253 --rc genhtml_branch_coverage=1 00:38:32.253 --rc genhtml_function_coverage=1 00:38:32.253 --rc genhtml_legend=1 00:38:32.254 --rc geninfo_all_blocks=1 00:38:32.254 --rc geninfo_unexecuted_blocks=1 00:38:32.254 00:38:32.254 ' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:32.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.254 --rc genhtml_branch_coverage=1 00:38:32.254 --rc genhtml_function_coverage=1 00:38:32.254 --rc genhtml_legend=1 00:38:32.254 --rc geninfo_all_blocks=1 00:38:32.254 --rc geninfo_unexecuted_blocks=1 00:38:32.254 00:38:32.254 ' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:32.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.254 --rc genhtml_branch_coverage=1 00:38:32.254 --rc genhtml_function_coverage=1 00:38:32.254 --rc genhtml_legend=1 00:38:32.254 --rc geninfo_all_blocks=1 00:38:32.254 --rc geninfo_unexecuted_blocks=1 00:38:32.254 00:38:32.254 ' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:32.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.254 --rc genhtml_branch_coverage=1 00:38:32.254 --rc genhtml_function_coverage=1 00:38:32.254 --rc genhtml_legend=1 00:38:32.254 --rc geninfo_all_blocks=1 00:38:32.254 --rc geninfo_unexecuted_blocks=1 00:38:32.254 00:38:32.254 ' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.254 13:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:32.254 13:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:32.254 13:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:34.200 
13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:34.200 13:49:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:34.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:34.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:34.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:34.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:34.200 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:34.201 13:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:34.499 13:49:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:34.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:34.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:38:34.499 00:38:34.499 --- 10.0.0.2 ping statistics --- 00:38:34.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.499 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:34.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:34.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:38:34.499 00:38:34.499 --- 10.0.0.1 ping statistics --- 00:38:34.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.499 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # 
nvmfpid=436658 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 436658 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 436658 ']' 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:34.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:34.499 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.499 [2024-10-14 13:49:26.193367] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:34.499 [2024-10-14 13:49:26.194456] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:38:34.499 [2024-10-14 13:49:26.194522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:34.499 [2024-10-14 13:49:26.258197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.499 [2024-10-14 13:49:26.303981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:34.499 [2024-10-14 13:49:26.304035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:34.499 [2024-10-14 13:49:26.304063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:34.499 [2024-10-14 13:49:26.304075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:34.499 [2024-10-14 13:49:26.304084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:34.499 [2024-10-14 13:49:26.304670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.782 [2024-10-14 13:49:26.389842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:34.782 [2024-10-14 13:49:26.390157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.782 [2024-10-14 13:49:26.445268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.782 
13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.782 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.783 [2024-10-14 13:49:26.461447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.783 malloc0 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:34.783 { 00:38:34.783 "params": { 00:38:34.783 "name": "Nvme$subsystem", 00:38:34.783 "trtype": "$TEST_TRANSPORT", 00:38:34.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.783 "adrfam": "ipv4", 00:38:34.783 "trsvcid": "$NVMF_PORT", 00:38:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.783 "hdgst": ${hdgst:-false}, 00:38:34.783 "ddgst": ${ddgst:-false} 00:38:34.783 }, 00:38:34.783 "method": "bdev_nvme_attach_controller" 00:38:34.783 } 00:38:34.783 EOF 00:38:34.783 )") 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:38:34.783 13:49:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:34.783 13:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:34.783 "params": { 00:38:34.783 "name": "Nvme1", 00:38:34.783 "trtype": "tcp", 00:38:34.783 "traddr": "10.0.0.2", 00:38:34.783 "adrfam": "ipv4", 00:38:34.783 "trsvcid": "4420", 00:38:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.783 "hdgst": false, 00:38:34.783 "ddgst": false 00:38:34.783 }, 00:38:34.783 "method": "bdev_nvme_attach_controller" 00:38:34.783 }' 00:38:34.783 [2024-10-14 13:49:26.543380] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:38:34.783 [2024-10-14 13:49:26.543478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436687 ] 00:38:34.783 [2024-10-14 13:49:26.603880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.041 [2024-10-14 13:49:26.657799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.299 Running I/O for 10 seconds... 
00:38:37.167 5468.00 IOPS, 42.72 MiB/s [2024-10-14T11:49:29.954Z] 5486.50 IOPS, 42.86 MiB/s [2024-10-14T11:49:31.328Z] 5517.00 IOPS, 43.10 MiB/s [2024-10-14T11:49:32.262Z] 5505.75 IOPS, 43.01 MiB/s [2024-10-14T11:49:33.222Z] 5523.60 IOPS, 43.15 MiB/s [2024-10-14T11:49:34.156Z] 5517.17 IOPS, 43.10 MiB/s [2024-10-14T11:49:35.089Z] 5525.71 IOPS, 43.17 MiB/s [2024-10-14T11:49:36.023Z] 5521.88 IOPS, 43.14 MiB/s [2024-10-14T11:49:36.956Z] 5523.67 IOPS, 43.15 MiB/s [2024-10-14T11:49:37.215Z] 5521.90 IOPS, 43.14 MiB/s 00:38:45.362 Latency(us) 00:38:45.362 [2024-10-14T11:49:37.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.362 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:45.362 Verification LBA range: start 0x0 length 0x1000 00:38:45.362 Nvme1n1 : 10.02 5526.01 43.17 0.00 0.00 23101.49 494.55 30098.01 00:38:45.362 [2024-10-14T11:49:37.215Z] =================================================================================================================== 00:38:45.362 [2024-10-14T11:49:37.215Z] Total : 5526.01 43.17 0.00 0.00 23101.49 494.55 30098.01 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=437868 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:38:45.362 13:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:45.362 { 00:38:45.362 "params": { 00:38:45.362 "name": "Nvme$subsystem", 00:38:45.362 "trtype": "$TEST_TRANSPORT", 00:38:45.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:45.362 "adrfam": "ipv4", 00:38:45.362 "trsvcid": "$NVMF_PORT", 00:38:45.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:45.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:45.362 "hdgst": ${hdgst:-false}, 00:38:45.362 "ddgst": ${ddgst:-false} 00:38:45.362 }, 00:38:45.362 "method": "bdev_nvme_attach_controller" 00:38:45.362 } 00:38:45.362 EOF 00:38:45.362 )") 00:38:45.362 [2024-10-14 13:49:37.141233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.141273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:45.362 13:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:45.362 "params": { 00:38:45.362 "name": "Nvme1", 00:38:45.362 "trtype": "tcp", 00:38:45.362 "traddr": "10.0.0.2", 00:38:45.362 "adrfam": "ipv4", 00:38:45.362 "trsvcid": "4420", 00:38:45.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:45.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:45.362 "hdgst": false, 00:38:45.362 "ddgst": false 00:38:45.362 }, 00:38:45.362 "method": "bdev_nvme_attach_controller" 00:38:45.362 }' 00:38:45.362 [2024-10-14 13:49:37.149145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.149191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.157142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.157163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.165140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.165161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.173141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.173161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.181138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.181158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.181574] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:38:45.362 [2024-10-14 13:49:37.181642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437868 ] 00:38:45.362 [2024-10-14 13:49:37.189140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.189161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.197137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.197157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.205138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.205158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.362 [2024-10-14 13:49:37.213151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.362 [2024-10-14 13:49:37.213173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.221142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.221163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.229140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.229160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.237138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.237158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:38:45.621 [2024-10-14 13:49:37.243322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.621 [2024-10-14 13:49:37.245145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.245165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.253201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.253248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.261167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.261199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.269142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.269163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.277141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.277162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.285140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.285160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.292368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.621 [2024-10-14 13:49:37.293138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.293157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [2024-10-14 13:49:37.301138] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.621 [2024-10-14 13:49:37.301158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.621 [... the same two-message error pair (subsystem.c:2128 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats at roughly 8 ms intervals, timestamps 13:49:37.309 through 13:49:37.509 ...] 00:38:45.880 Running I/O for 5 seconds... 00:38:45.880 [... error pair continues repeating, timestamps 13:49:37.524 through 13:49:38.511 ...] 00:38:46.914 11266.00 IOPS, 88.02 MiB/s [2024-10-14T11:49:38.767Z] [... error pair continues repeating, timestamps 13:49:38.524 through 13:49:39.268 ...] 00:38:47.431 [2024-10-14 13:49:39.284069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.431 [2024-10-14 13:49:39.284095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:38:47.689 [2024-10-14 13:49:39.295215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.689 [2024-10-14 13:49:39.295240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.306877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.306916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.317843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.317867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.333963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.333988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.343683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.343708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.355645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.355671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.370137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.370163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.379861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.379886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.391841] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.391867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.406083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.406123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.415239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.415264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.430210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.430236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.441166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.441192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.452493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.452532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.465642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.465665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.475755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.475795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.487589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.487612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.503284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.503310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 11321.00 IOPS, 88.45 MiB/s [2024-10-14T11:49:39.543Z] [2024-10-14 13:49:39.518228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.518254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.528274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.528298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.690 [2024-10-14 13:49:39.540571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.690 [2024-10-14 13:49:39.540597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.552298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.552324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.565943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.565968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.575871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.575895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.588326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.588351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.602491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.602516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.613001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.613025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.625136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.625163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.635831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.635856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.650428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.650478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.660000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.660026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.672256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.672283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.685662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 
[2024-10-14 13:49:39.685687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.695417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.695455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.709580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.709606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.719185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.719211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.734771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.734797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.745667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.745692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.756859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.756884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.769902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.769927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.779626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.779649] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.948 [2024-10-14 13:49:39.791734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.948 [2024-10-14 13:49:39.791759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.805251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.805284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.815037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.815062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.830199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.830225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.841380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.841419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.852207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.852234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.865049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.865074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.874779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.874810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:48.206 [2024-10-14 13:49:39.889737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.889762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.900675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.900700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.206 [2024-10-14 13:49:39.914198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.206 [2024-10-14 13:49:39.914222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:39.923625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:39.923650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:39.936018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:39.936041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:39.949431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:39.949457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:39.959109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:39.959156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:39.974584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:39.974622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:39.985087] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:39.985126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:39.996790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:39.996813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:40.008000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:40.008041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:40.020081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:40.020108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:40.033711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:40.033752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:40.043389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:40.043417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.207 [2024-10-14 13:49:40.056023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.207 [2024-10-14 13:49:40.056051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.069355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.069382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.079768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.079793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.092110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.092144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.107527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.107561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.122421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.122462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.131859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.131884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.143947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.143973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.157515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.157540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.167214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.167240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.182747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 
[2024-10-14 13:49:40.182771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.193542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.193581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.204941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.204964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.216667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.216691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.229677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.229702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.239242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.239270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.254619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.254642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.265362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.265386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.275957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.275983] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.290976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.291000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.300711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.300736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.465 [2024-10-14 13:49:40.313400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.465 [2024-10-14 13:49:40.313438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.324394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.324420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.337640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.337666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.347056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.347082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.362552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.362577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.373603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.373629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:48.724 [2024-10-14 13:49:40.385491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.385516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.396441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.396480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.410515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.410541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.420585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.420609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.432574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.432599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.445780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.445806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.455881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.455904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.468489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.468528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.479748] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.479774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.490876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.490902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.502065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.502090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.513141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.513166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 11307.00 IOPS, 88.34 MiB/s [2024-10-14T11:49:40.577Z] [2024-10-14 13:49:40.524282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.524307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.539099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.539124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.554184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.554210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.563828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.563851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.724 [2024-10-14 13:49:40.575543] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.724 [2024-10-14 13:49:40.575569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.982 [2024-10-14 13:49:40.589660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.589686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.599709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.599735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.612079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.612108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.625452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.625495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.635221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.635247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.650529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.650555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.660470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.660509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.672478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.672504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.687320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.687345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.697059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.697082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.709176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.709202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.719995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.720022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.731451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.731474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.742333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.742359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.753455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.753480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.764771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 
[2024-10-14 13:49:40.764797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.776398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.776448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.789853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.789878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.798988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.799014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.811379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.811405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.983 [2024-10-14 13:49:40.827177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.983 [2024-10-14 13:49:40.827204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.841911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.841947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.851519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.851544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.864365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.864391] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.879697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.879722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.894272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.894298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.903980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.904005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.916829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.916852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.927572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.927597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.941387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.941427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.951512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.951537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.963484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.963511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:49.241 [2024-10-14 13:49:40.979575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.979614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:40.993490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:40.993530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:41.002980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.241 [2024-10-14 13:49:41.003004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.241 [2024-10-14 13:49:41.018491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.242 [2024-10-14 13:49:41.018538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.242 [2024-10-14 13:49:41.029608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.242 [2024-10-14 13:49:41.029645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.242 [2024-10-14 13:49:41.039263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.242 [2024-10-14 13:49:41.039289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.242 [2024-10-14 13:49:41.051286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.242 [2024-10-14 13:49:41.051312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.242 [2024-10-14 13:49:41.062918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.242 [2024-10-14 13:49:41.062941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.242 [2024-10-14 13:49:41.073452] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.242 [2024-10-14 13:49:41.073477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.242 [2024-10-14 13:49:41.090573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.242 [2024-10-14 13:49:41.090597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.100440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.100480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.112450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.112473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.128184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.128209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.138212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.138238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.150007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.150032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.161083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.161142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.172893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.172920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.184065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.184091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.198120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.198173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.207555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.207579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.219405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.219439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.233973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.234011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.243741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.243783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.255720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.255745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.269004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 
[2024-10-14 13:49:41.269029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.278632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.278656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.290471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.290496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.301678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.301701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.311367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.311391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.326019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.326045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.336993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.337017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.500 [2024-10-14 13:49:41.348961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.500 [2024-10-14 13:49:41.348986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.360392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.360418] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.374808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.374833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.384767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.384791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.397152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.397193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.407711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.407736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.420886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.420909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.430075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.430100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.442286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.442312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.453786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.453811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:49.758 [2024-10-14 13:49:41.465761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.465795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.476335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.476361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.488501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.488539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.501733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.501758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.511099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.511147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 11314.50 IOPS, 88.39 MiB/s [2024-10-14T11:49:41.611Z] [2024-10-14 13:49:41.526194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.526218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.536850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.536873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.547645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.547668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:49.758 [2024-10-14 13:49:41.561648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.561688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.571774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.571797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.583975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.583999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.597173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.597199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.758 [2024-10-14 13:49:41.607256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.758 [2024-10-14 13:49:41.607280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.622711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.622735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.633191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.633217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.644715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.644737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.657288] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.657314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.666596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.666621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.679271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.679295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.690657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.690681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.701580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.701603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.711519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.711545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.723658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.723682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.736822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.736847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.747198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.747222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.762092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.762118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.773032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.773059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.784851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.784874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.798421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.798446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.809198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.809224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.820522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.820562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.832137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.832163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.848146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 
[2024-10-14 13:49:41.848170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.860770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.860810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.017 [2024-10-14 13:49:41.870389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.017 [2024-10-14 13:49:41.870430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.275 [2024-10-14 13:49:41.886811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.275 [2024-10-14 13:49:41.886836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.897664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.897689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.908880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.908904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.920193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.920220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.934420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.934467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.944407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.944434] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.956800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.956825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.968014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.968039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.982542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.982582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:41.992894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:41.992917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.004945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.004972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.018168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.018196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.027627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.027652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.039796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.039820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:50.276 [2024-10-14 13:49:42.051067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.051091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.062295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.062322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.073526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.073566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.084591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.084617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.098072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.098097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.107857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.107880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.276 [2024-10-14 13:49:42.120281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.276 [2024-10-14 13:49:42.120307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.133290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.133316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.143789] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.143813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.155923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.155948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.171444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.171483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.185512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.185538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.194997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.195022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.210161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.210193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.226963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.226986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.236442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.236482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.248523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.248548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.261550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.261576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.270952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.270977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.286312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.286337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.297013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.297036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.307937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.307962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.322712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.322737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.331964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.331989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.344201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 
[2024-10-14 13:49:42.344227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.358048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.358073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.367507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.367541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.534 [2024-10-14 13:49:42.379481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.534 [2024-10-14 13:49:42.379520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.394396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.394435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.404253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.404279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.416515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.416555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.430572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.430597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.441360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.441384] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.453798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.453822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.464273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.464299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.476310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.476336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.489061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.489087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.499194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.499219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.792 [2024-10-14 13:49:42.514432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.792 [2024-10-14 13:49:42.514474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 11314.00 IOPS, 88.39 MiB/s [2024-10-14T11:49:42.646Z] [2024-10-14 13:49:42.525646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.525671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.533171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.533197] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 00:38:50.793 Latency(us) 00:38:50.793 [2024-10-14T11:49:42.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.793 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:38:50.793 Nvme1n1 : 5.01 11315.53 88.40 0.00 0.00 11297.78 3106.89 19320.98 00:38:50.793 [2024-10-14T11:49:42.646Z] =================================================================================================================== 00:38:50.793 [2024-10-14T11:49:42.646Z] Total : 11315.53 88.40 0.00 0.00 11297.78 3106.89 19320.98 00:38:50.793 [2024-10-14 13:49:42.541145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.541169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.549146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.549178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.557198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.557245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.565222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.565268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.573196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.573237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.581187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:38:50.793 [2024-10-14 13:49:42.581231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.589188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.589241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.597202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.597257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.605200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.605252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.613198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.613249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.621199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.621250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.629195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.629241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.637193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.637240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.793 [2024-10-14 13:49:42.645213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.793 [2024-10-14 13:49:42.645260] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.653195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.653240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.661188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.661233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.669186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.669231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.677158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.677201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.685149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.685188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.693190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.693236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.701190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.701251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.709179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.709215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:51.051 [2024-10-14 13:49:42.717150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.717173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 [2024-10-14 13:49:42.725144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:51.051 [2024-10-14 13:49:42.725164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:51.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (437868) - No such process 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 437868 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:51.051 delay0 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.051 13:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:51.051 [2024-10-14 13:49:42.836070] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:59.158 Initializing NVMe Controllers 00:38:59.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:59.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:59.158 Initialization complete. Launching workers. 
00:38:59.158 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 20986 00:38:59.158 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21102, failed to submit 122 00:38:59.158 success 21020, unsuccessful 82, failed 0 00:38:59.158 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:59.158 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:59.158 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:59.158 13:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:59.158 rmmod nvme_tcp 00:38:59.158 rmmod nvme_fabrics 00:38:59.158 rmmod nvme_keyring 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 436658 ']' 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 436658 00:38:59.158 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@950 -- # '[' -z 436658 ']' 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 436658 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 436658 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 436658' 00:38:59.159 killing process with pid 436658 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 436658 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 436658 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:59.159 
13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.159 13:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:00.541 00:39:00.541 real 0m28.615s 00:39:00.541 user 0m40.626s 00:39:00.541 sys 0m10.041s 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.541 ************************************ 00:39:00.541 END TEST nvmf_zcopy 00:39:00.541 ************************************ 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:00.541 
************************************ 00:39:00.541 START TEST nvmf_nmic 00:39:00.541 ************************************ 00:39:00.541 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:00.799 * Looking for test storage... 00:39:00.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:00.799 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:00.799 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:39:00.799 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.800 13:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.800 13:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:00.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.800 --rc genhtml_branch_coverage=1 00:39:00.800 --rc genhtml_function_coverage=1 00:39:00.800 --rc genhtml_legend=1 00:39:00.800 --rc geninfo_all_blocks=1 00:39:00.800 --rc geninfo_unexecuted_blocks=1 00:39:00.800 00:39:00.800 ' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:00.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.800 --rc genhtml_branch_coverage=1 00:39:00.800 --rc genhtml_function_coverage=1 00:39:00.800 --rc genhtml_legend=1 00:39:00.800 --rc geninfo_all_blocks=1 00:39:00.800 --rc geninfo_unexecuted_blocks=1 00:39:00.800 00:39:00.800 ' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:00.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.800 --rc genhtml_branch_coverage=1 00:39:00.800 --rc genhtml_function_coverage=1 00:39:00.800 --rc genhtml_legend=1 00:39:00.800 --rc geninfo_all_blocks=1 00:39:00.800 --rc geninfo_unexecuted_blocks=1 00:39:00.800 00:39:00.800 ' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:00.800 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.800 --rc genhtml_branch_coverage=1 00:39:00.800 --rc genhtml_function_coverage=1 00:39:00.800 --rc genhtml_legend=1 00:39:00.800 --rc geninfo_all_blocks=1 00:39:00.800 --rc geninfo_unexecuted_blocks=1 00:39:00.800 00:39:00.800 ' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:00.800 13:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.800 13:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:00.800 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:00.801 13:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:02.716 13:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:02.716 13:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:02.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:02.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.716 13:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.716 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:02.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.717 13:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:02.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:02.717 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:02.976 13:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:02.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:02.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:39:02.976 00:39:02.976 --- 10.0.0.2 ping statistics --- 00:39:02.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.976 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:02.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:02.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:39:02.976 00:39:02.976 --- 10.0.0.1 ping statistics --- 00:39:02.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.976 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=441303 
00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 441303 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 441303 ']' 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:02.976 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:02.976 [2024-10-14 13:49:54.737568] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:02.976 [2024-10-14 13:49:54.738714] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:39:02.976 [2024-10-14 13:49:54.738780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.976 [2024-10-14 13:49:54.806506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:03.234 [2024-10-14 13:49:54.855068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:03.234 [2024-10-14 13:49:54.855124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:03.234 [2024-10-14 13:49:54.855159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:03.235 [2024-10-14 13:49:54.855171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:03.235 [2024-10-14 13:49:54.855181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:03.235 [2024-10-14 13:49:54.856660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.235 [2024-10-14 13:49:54.856714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:03.235 [2024-10-14 13:49:54.856782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:03.235 [2024-10-14 13:49:54.856785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.235 [2024-10-14 13:49:54.938604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:03.235 [2024-10-14 13:49:54.938842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:03.235 [2024-10-14 13:49:54.939142] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:03.235 [2024-10-14 13:49:54.939761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:03.235 [2024-10-14 13:49:54.939986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.235 13:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 [2024-10-14 13:49:54.993437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 Malloc0 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 [2024-10-14 13:49:55.061667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:03.235 test case1: single bdev can't be used in multiple subsystems 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.235 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.235 [2024-10-14 13:49:55.085382] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:39:03.235 [2024-10-14 13:49:55.085412] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:03.235 [2024-10-14 13:49:55.085428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.235 request: 00:39:03.235 { 00:39:03.493 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:03.493 "namespace": { 00:39:03.493 "bdev_name": "Malloc0", 00:39:03.493 "no_auto_visible": false 00:39:03.493 }, 00:39:03.493 "method": "nvmf_subsystem_add_ns", 00:39:03.493 "req_id": 1 00:39:03.493 } 00:39:03.493 Got JSON-RPC error response 00:39:03.493 response: 00:39:03.493 { 00:39:03.493 "code": -32602, 00:39:03.493 "message": "Invalid parameters" 00:39:03.493 } 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:03.493 Adding namespace failed - expected result. 
00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:03.493 test case2: host connect to nvmf target in multiple paths 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:03.493 [2024-10-14 13:49:55.093501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:03.493 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:03.761 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:03.761 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:03.761 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:03.761 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:03.761 13:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:05.661 13:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:05.661 13:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:05.661 13:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:05.918 13:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:05.918 13:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:05.918 13:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:05.918 13:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:05.918 [global] 00:39:05.918 thread=1 00:39:05.918 invalidate=1 00:39:05.918 rw=write 00:39:05.918 time_based=1 00:39:05.918 runtime=1 00:39:05.918 ioengine=libaio 00:39:05.918 direct=1 00:39:05.918 bs=4096 00:39:05.918 iodepth=1 00:39:05.918 norandommap=0 00:39:05.918 numjobs=1 00:39:05.918 00:39:05.918 verify_dump=1 00:39:05.918 verify_backlog=512 00:39:05.918 verify_state_save=0 00:39:05.918 do_verify=1 00:39:05.918 verify=crc32c-intel 00:39:05.918 [job0] 00:39:05.918 filename=/dev/nvme0n1 00:39:05.918 Could not set queue depth (nvme0n1) 00:39:05.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:05.918 fio-3.35 00:39:05.918 Starting 1 thread 00:39:07.291 00:39:07.291 job0: (groupid=0, jobs=1): err= 0: pid=441739: Mon Oct 14 
13:49:58 2024 00:39:07.291 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:39:07.291 slat (nsec): min=6981, max=35423, avg=23442.26, stdev=10341.58 00:39:07.291 clat (usec): min=40353, max=41072, avg=40942.94, stdev=134.62 00:39:07.291 lat (usec): min=40360, max=41091, avg=40966.39, stdev=136.56 00:39:07.291 clat percentiles (usec): 00:39:07.291 | 1.00th=[40109], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:07.291 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:07.291 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:07.291 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:07.291 | 99.99th=[41157] 00:39:07.291 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:39:07.291 slat (nsec): min=7098, max=28360, avg=8276.51, stdev=2460.08 00:39:07.291 clat (usec): min=162, max=389, avg=175.95, stdev=14.02 00:39:07.291 lat (usec): min=170, max=397, avg=184.22, stdev=14.54 00:39:07.291 clat percentiles (usec): 00:39:07.291 | 1.00th=[ 165], 5.00th=[ 167], 10.00th=[ 167], 20.00th=[ 169], 00:39:07.291 | 30.00th=[ 172], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:39:07.291 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:39:07.291 | 99.00th=[ 202], 99.50th=[ 273], 99.90th=[ 392], 99.95th=[ 392], 00:39:07.291 | 99.99th=[ 392] 00:39:07.291 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:07.291 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:07.291 lat (usec) : 250=95.14%, 500=0.56% 00:39:07.291 lat (msec) : 50=4.30% 00:39:07.291 cpu : usr=0.10%, sys=0.68%, ctx=535, majf=0, minf=1 00:39:07.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:07.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:07.291 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:07.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:07.291 00:39:07.291 Run status group 0 (all jobs): 00:39:07.291 READ: bw=88.6KiB/s (90.8kB/s), 88.6KiB/s-88.6KiB/s (90.8kB/s-90.8kB/s), io=92.0KiB (94.2kB), run=1038-1038msec 00:39:07.291 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:39:07.291 00:39:07.291 Disk stats (read/write): 00:39:07.291 nvme0n1: ios=69/512, merge=0/0, ticks=808/89, in_queue=897, util=91.68% 00:39:07.291 13:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:07.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:07.291 13:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:07.291 13:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:39:07.291 13:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:07.291 13:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:07.291 13:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:07.291 rmmod nvme_tcp 00:39:07.291 rmmod nvme_fabrics 00:39:07.291 rmmod nvme_keyring 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 441303 ']' 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 441303 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 441303 ']' 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 441303 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441303 00:39:07.291 
13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441303' 00:39:07.291 killing process with pid 441303 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 441303 00:39:07.291 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 441303 00:39:07.550 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:07.550 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:07.550 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:07.550 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:07.550 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:39:07.550 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:39:07.551 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:07.551 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:07.551 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:07.551 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.551 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.551 13:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.087 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:10.087 00:39:10.087 real 0m8.987s 00:39:10.087 user 0m16.805s 00:39:10.087 sys 0m3.250s 00:39:10.087 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:10.088 ************************************ 00:39:10.088 END TEST nvmf_nmic 00:39:10.088 ************************************ 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:10.088 ************************************ 00:39:10.088 START TEST nvmf_fio_target 00:39:10.088 ************************************ 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:10.088 * Looking for test storage... 
00:39:10.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:10.088 
13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.088 --rc genhtml_branch_coverage=1 00:39:10.088 --rc genhtml_function_coverage=1 00:39:10.088 --rc genhtml_legend=1 00:39:10.088 --rc geninfo_all_blocks=1 00:39:10.088 --rc geninfo_unexecuted_blocks=1 00:39:10.088 00:39:10.088 ' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.088 --rc genhtml_branch_coverage=1 00:39:10.088 --rc genhtml_function_coverage=1 00:39:10.088 --rc genhtml_legend=1 00:39:10.088 --rc geninfo_all_blocks=1 00:39:10.088 --rc geninfo_unexecuted_blocks=1 00:39:10.088 00:39:10.088 ' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.088 --rc genhtml_branch_coverage=1 00:39:10.088 --rc genhtml_function_coverage=1 00:39:10.088 --rc genhtml_legend=1 00:39:10.088 --rc geninfo_all_blocks=1 00:39:10.088 --rc geninfo_unexecuted_blocks=1 00:39:10.088 00:39:10.088 ' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.088 --rc genhtml_branch_coverage=1 00:39:10.088 --rc genhtml_function_coverage=1 00:39:10.088 --rc genhtml_legend=1 00:39:10.088 --rc geninfo_all_blocks=1 
00:39:10.088 --rc geninfo_unexecuted_blocks=1 00:39:10.088 00:39:10.088 ' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:10.088 
13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.088 13:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.088 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:10.089 
13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:10.089 13:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:10.089 13:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:11.989 13:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:11.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:11.989 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.989 
13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:11.989 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:11.989 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:11.989 13:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.989 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:11.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:39:11.990 00:39:11.990 --- 10.0.0.2 ping statistics --- 00:39:11.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.990 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:11.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:39:11.990 00:39:11.990 --- 10.0.0.1 ping statistics --- 00:39:11.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.990 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:11.990 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:12.248 13:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=443823 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 443823 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 443823 ']' 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:12.248 13:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:12.248 [2024-10-14 13:50:03.905582] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:12.248 [2024-10-14 13:50:03.906758] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:39:12.248 [2024-10-14 13:50:03.906817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:12.248 [2024-10-14 13:50:03.979335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:12.248 [2024-10-14 13:50:04.025694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:12.248 [2024-10-14 13:50:04.025746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:12.248 [2024-10-14 13:50:04.025759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:12.248 [2024-10-14 13:50:04.025770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:12.248 [2024-10-14 13:50:04.025781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:12.248 [2024-10-14 13:50:04.027279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:12.248 [2024-10-14 13:50:04.027341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:12.248 [2024-10-14 13:50:04.027415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:12.248 [2024-10-14 13:50:04.027418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.507 [2024-10-14 13:50:04.111417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:12.507 [2024-10-14 13:50:04.111616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:12.507 [2024-10-14 13:50:04.111923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
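
Before the target app started, the `nvmf_tcp_init` steps logged above split the two `cvl_0_*` ports across network namespaces: one stays in the default namespace as the initiator, the other moves into a private namespace as the target. A dry-run sketch of that topology (device and namespace names taken from the log; the real sequence needs root and the actual NICs, so commands are printed rather than executed):

```shell
#!/bin/sh
# Dry-run sketch of the netns topology built by nvmf/common.sh above.
# run() prints each command instead of executing it (real use requires root).
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk          # target namespace name from the log
TGT=cvl_0_0 INIT=cvl_0_1    # net devices found under 0000:0a:00.0 / 0000:0a:00.1

OUT=$(
  run ip netns add "$NS"
  run ip link set "$TGT" netns "$NS"
  run ip addr add 10.0.0.1/24 dev "$INIT"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  run ip link set "$INIT" up
  run ip netns exec "$NS" ip link set "$TGT" up
  run ping -c 1 10.0.0.2
)
printf '%s\n' "$OUT"
```

With this layout, 10.0.0.1 (initiator side) and 10.0.0.2 (target side) can ping each other across the physical link while sharing one host, which is what the two successful single-packet pings in the log verify.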
00:39:12.507 [2024-10-14 13:50:04.112603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:12.507 [2024-10-14 13:50:04.112810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:12.507 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:12.507 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:12.507 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:12.507 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:12.507 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:12.507 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.507 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:12.765 [2024-10-14 13:50:04.444103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.765 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:13.024 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:13.024 13:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:13.283 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:13.283 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:13.853 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:13.853 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:14.112 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:14.112 13:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:14.370 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:14.628 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:14.628 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:14.886 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:14.886 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:15.144 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:15.144 13:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:15.402 13:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:15.660 13:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:15.660 13:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:15.918 13:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:15.918 13:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:16.176 13:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:16.434 [2024-10-14 13:50:08.204279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:16.434 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:16.691 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:16.949 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:17.207 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:17.207 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:39:17.207 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:17.207 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:39:17.207 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:39:17.207 13:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:19.106 13:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:19.106 13:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:19.106 13:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:19.106 13:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:19.106 13:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:19.106 13:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
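
The `rpc.py` calls interleaved through the `fio.sh` lines above amount to a short control-plane sequence: create the TCP transport, carve out malloc bdevs (plus a RAID-0 and a concat bdev built on top of them), expose everything as namespaces of one subsystem, and add a listener. A condensed dry-run sketch of that sequence (commands and the NQN copied from the log; the wrapper just prints them, since no live target is available here):

```shell
#!/bin/sh
# Dry-run sketch of the provisioning sequence driven by target/fio.sh above.
# Against a live target, replace the body with: "$SPDK_DIR/scripts/rpc.py" "$@"
rpc() { printf 'rpc.py %s\n' "$*"; }

OUT=$(
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512      # repeated for Malloc0..Malloc6 in the log
  rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
)
printf '%s\n' "$OUT"
```

Each namespace added here becomes one of the `/dev/nvme0n1`..`/dev/nvme0n4` block devices that `nvme connect` exposes and that `waitforserial` counts before fio starts.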
common/autotest_common.sh@1208 -- # return 0 00:39:19.106 13:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:19.364 [global] 00:39:19.364 thread=1 00:39:19.364 invalidate=1 00:39:19.364 rw=write 00:39:19.364 time_based=1 00:39:19.364 runtime=1 00:39:19.364 ioengine=libaio 00:39:19.364 direct=1 00:39:19.364 bs=4096 00:39:19.364 iodepth=1 00:39:19.364 norandommap=0 00:39:19.364 numjobs=1 00:39:19.364 00:39:19.364 verify_dump=1 00:39:19.364 verify_backlog=512 00:39:19.364 verify_state_save=0 00:39:19.364 do_verify=1 00:39:19.364 verify=crc32c-intel 00:39:19.364 [job0] 00:39:19.364 filename=/dev/nvme0n1 00:39:19.364 [job1] 00:39:19.364 filename=/dev/nvme0n2 00:39:19.364 [job2] 00:39:19.364 filename=/dev/nvme0n3 00:39:19.364 [job3] 00:39:19.364 filename=/dev/nvme0n4 00:39:19.364 Could not set queue depth (nvme0n1) 00:39:19.364 Could not set queue depth (nvme0n2) 00:39:19.364 Could not set queue depth (nvme0n3) 00:39:19.364 Could not set queue depth (nvme0n4) 00:39:19.364 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:19.364 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:19.364 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:19.364 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:19.364 fio-3.35 00:39:19.364 Starting 4 threads 00:39:20.736 00:39:20.736 job0: (groupid=0, jobs=1): err= 0: pid=444873: Mon Oct 14 13:50:12 2024 00:39:20.736 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:20.736 slat (nsec): min=5433, max=46418, avg=9908.64, stdev=5021.67 00:39:20.736 clat (usec): min=211, max=364, avg=241.24, stdev=17.16 00:39:20.736 lat (usec): min=217, max=370, 
avg=251.15, stdev=19.36 00:39:20.736 clat percentiles (usec): 00:39:20.736 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:39:20.736 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:39:20.736 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 273], 00:39:20.736 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 330], 99.95th=[ 355], 00:39:20.736 | 99.99th=[ 363] 00:39:20.736 write: IOPS=2400, BW=9602KiB/s (9833kB/s)(9612KiB/1001msec); 0 zone resets 00:39:20.736 slat (nsec): min=6562, max=64741, avg=12306.53, stdev=6310.18 00:39:20.736 clat (usec): min=137, max=401, avg=184.03, stdev=33.42 00:39:20.736 lat (usec): min=148, max=431, avg=196.33, stdev=32.96 00:39:20.736 clat percentiles (usec): 00:39:20.736 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:39:20.736 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:39:20.736 | 70.00th=[ 186], 80.00th=[ 198], 90.00th=[ 243], 95.00th=[ 253], 00:39:20.736 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 379], 99.95th=[ 392], 00:39:20.736 | 99.99th=[ 400] 00:39:20.736 bw ( KiB/s): min= 9368, max= 9368, per=54.04%, avg=9368.00, stdev= 0.00, samples=1 00:39:20.736 iops : min= 2342, max= 2342, avg=2342.00, stdev= 0.00, samples=1 00:39:20.736 lat (usec) : 250=83.15%, 500=16.85% 00:39:20.736 cpu : usr=3.70%, sys=6.50%, ctx=4453, majf=0, minf=1 00:39:20.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.737 issued rwts: total=2048,2403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.737 job1: (groupid=0, jobs=1): err= 0: pid=444875: Mon Oct 14 13:50:12 2024 00:39:20.737 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:39:20.737 slat (nsec): min=7161, max=18046, avg=15117.09, 
stdev=2827.36 00:39:20.737 clat (usec): min=22063, max=41043, avg=40090.11, stdev=4028.46 00:39:20.737 lat (usec): min=22080, max=41060, avg=40105.23, stdev=4028.08 00:39:20.737 clat percentiles (usec): 00:39:20.737 | 1.00th=[22152], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:20.737 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:20.737 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:20.737 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:20.737 | 99.99th=[41157] 00:39:20.737 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:39:20.737 slat (nsec): min=7805, max=40306, avg=9012.42, stdev=2126.97 00:39:20.737 clat (usec): min=154, max=418, avg=236.88, stdev=28.21 00:39:20.737 lat (usec): min=162, max=428, avg=245.89, stdev=28.38 00:39:20.737 clat percentiles (usec): 00:39:20.737 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 184], 20.00th=[ 239], 00:39:20.737 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 245], 00:39:20.737 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 262], 00:39:20.737 | 99.00th=[ 289], 99.50th=[ 343], 99.90th=[ 420], 99.95th=[ 420], 00:39:20.737 | 99.99th=[ 420] 00:39:20.737 bw ( KiB/s): min= 4096, max= 4096, per=23.63%, avg=4096.00, stdev= 0.00, samples=1 00:39:20.737 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:20.737 lat (usec) : 250=83.52%, 500=12.36% 00:39:20.737 lat (msec) : 50=4.12% 00:39:20.737 cpu : usr=0.00%, sys=0.99%, ctx=535, majf=0, minf=1 00:39:20.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.737 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.737 job2: (groupid=0, 
jobs=1): err= 0: pid=444877: Mon Oct 14 13:50:12 2024 00:39:20.737 read: IOPS=793, BW=3172KiB/s (3248kB/s)(3188KiB/1005msec) 00:39:20.737 slat (nsec): min=4379, max=19712, avg=5520.19, stdev=2311.88 00:39:20.737 clat (usec): min=204, max=41016, avg=1002.35, stdev=5541.36 00:39:20.737 lat (usec): min=209, max=41029, avg=1007.87, stdev=5542.96 00:39:20.737 clat percentiles (usec): 00:39:20.737 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:39:20.737 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:39:20.737 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 251], 00:39:20.737 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:20.737 | 99.99th=[41157] 00:39:20.737 write: IOPS=1018, BW=4076KiB/s (4173kB/s)(4096KiB/1005msec); 0 zone resets 00:39:20.737 slat (nsec): min=5772, max=43190, avg=6987.47, stdev=2335.49 00:39:20.737 clat (usec): min=148, max=1062, avg=186.53, stdev=36.62 00:39:20.737 lat (usec): min=154, max=1069, avg=193.52, stdev=36.91 00:39:20.737 clat percentiles (usec): 00:39:20.737 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:39:20.737 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:39:20.737 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 225], 00:39:20.737 | 99.00th=[ 241], 99.50th=[ 310], 99.90th=[ 379], 99.95th=[ 1057], 00:39:20.737 | 99.99th=[ 1057] 00:39:20.737 bw ( KiB/s): min= 8192, max= 8192, per=47.25%, avg=8192.00, stdev= 0.00, samples=1 00:39:20.737 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:20.737 lat (usec) : 250=97.20%, 500=1.81% 00:39:20.737 lat (msec) : 2=0.11%, 10=0.05%, 50=0.82% 00:39:20.737 cpu : usr=0.70%, sys=1.00%, ctx=1821, majf=0, minf=1 00:39:20.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:39:20.737 issued rwts: total=797,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.737 job3: (groupid=0, jobs=1): err= 0: pid=444878: Mon Oct 14 13:50:12 2024 00:39:20.737 read: IOPS=118, BW=475KiB/s (487kB/s)(488KiB/1027msec) 00:39:20.737 slat (nsec): min=7004, max=34802, avg=10685.65, stdev=4688.30 00:39:20.737 clat (usec): min=242, max=41034, avg=7612.14, stdev=15700.10 00:39:20.737 lat (usec): min=249, max=41049, avg=7622.83, stdev=15702.90 00:39:20.737 clat percentiles (usec): 00:39:20.737 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 260], 00:39:20.737 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:39:20.737 | 70.00th=[ 285], 80.00th=[ 506], 90.00th=[41157], 95.00th=[41157], 00:39:20.737 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:20.737 | 99.99th=[41157] 00:39:20.737 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:39:20.737 slat (nsec): min=6720, max=25159, avg=7696.59, stdev=1048.50 00:39:20.737 clat (usec): min=144, max=270, avg=178.23, stdev=20.90 00:39:20.737 lat (usec): min=152, max=292, avg=185.93, stdev=21.08 00:39:20.737 clat percentiles (usec): 00:39:20.737 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:39:20.737 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:39:20.737 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 229], 00:39:20.737 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 269], 99.95th=[ 269], 00:39:20.737 | 99.99th=[ 269] 00:39:20.737 bw ( KiB/s): min= 4096, max= 4096, per=23.63%, avg=4096.00, stdev= 0.00, samples=1 00:39:20.737 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:20.737 lat (usec) : 250=80.44%, 500=15.62%, 750=0.47% 00:39:20.737 lat (msec) : 50=3.47% 00:39:20.737 cpu : usr=0.29%, sys=0.39%, ctx=635, majf=0, minf=1 00:39:20.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.737 issued rwts: total=122,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.737 00:39:20.737 Run status group 0 (all jobs): 00:39:20.737 READ: bw=11.4MiB/s (11.9MB/s), 87.1KiB/s-8184KiB/s (89.2kB/s-8380kB/s), io=11.7MiB (12.2MB), run=1001-1027msec 00:39:20.737 WRITE: bw=16.9MiB/s (17.8MB/s), 1994KiB/s-9602KiB/s (2042kB/s-9833kB/s), io=17.4MiB (18.2MB), run=1001-1027msec 00:39:20.737 00:39:20.737 Disk stats (read/write): 00:39:20.737 nvme0n1: ios=1761/2048, merge=0/0, ticks=1312/345, in_queue=1657, util=97.80% 00:39:20.737 nvme0n2: ios=42/512, merge=0/0, ticks=1703/121, in_queue=1824, util=97.86% 00:39:20.737 nvme0n3: ios=793/1024, merge=0/0, ticks=624/184, in_queue=808, util=88.90% 00:39:20.737 nvme0n4: ios=140/512, merge=0/0, ticks=1637/89, in_queue=1726, util=97.99% 00:39:20.738 13:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:20.738 [global] 00:39:20.738 thread=1 00:39:20.738 invalidate=1 00:39:20.738 rw=randwrite 00:39:20.738 time_based=1 00:39:20.738 runtime=1 00:39:20.738 ioengine=libaio 00:39:20.738 direct=1 00:39:20.738 bs=4096 00:39:20.738 iodepth=1 00:39:20.738 norandommap=0 00:39:20.738 numjobs=1 00:39:20.738 00:39:20.738 verify_dump=1 00:39:20.738 verify_backlog=512 00:39:20.738 verify_state_save=0 00:39:20.738 do_verify=1 00:39:20.738 verify=crc32c-intel 00:39:20.738 [job0] 00:39:20.738 filename=/dev/nvme0n1 00:39:20.738 [job1] 00:39:20.738 filename=/dev/nvme0n2 00:39:20.738 [job2] 00:39:20.738 filename=/dev/nvme0n3 00:39:20.738 [job3] 00:39:20.738 filename=/dev/nvme0n4 00:39:20.738 Could not set queue depth (nvme0n1) 
00:39:20.738 Could not set queue depth (nvme0n2) 00:39:20.738 Could not set queue depth (nvme0n3) 00:39:20.738 Could not set queue depth (nvme0n4) 00:39:20.995 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:20.995 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:20.995 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:20.996 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:20.996 fio-3.35 00:39:20.996 Starting 4 threads 00:39:22.369 00:39:22.369 job0: (groupid=0, jobs=1): err= 0: pid=445107: Mon Oct 14 13:50:13 2024 00:39:22.369 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:39:22.369 slat (nsec): min=8977, max=14808, avg=14152.43, stdev=1328.32 00:39:22.369 clat (usec): min=40663, max=42013, avg=41824.69, stdev=397.39 00:39:22.369 lat (usec): min=40672, max=42028, avg=41838.84, stdev=398.55 00:39:22.369 clat percentiles (usec): 00:39:22.369 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[42206], 00:39:22.369 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:22.369 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:22.369 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:22.369 | 99.99th=[42206] 00:39:22.369 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:39:22.369 slat (nsec): min=6501, max=48512, avg=11477.75, stdev=3608.32 00:39:22.369 clat (usec): min=147, max=1189, avg=248.22, stdev=59.78 00:39:22.369 lat (usec): min=154, max=1232, avg=259.70, stdev=60.96 00:39:22.369 clat percentiles (usec): 00:39:22.369 | 1.00th=[ 174], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 227], 00:39:22.369 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:39:22.369 | 70.00th=[ 249], 
80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 302], 00:39:22.369 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 1188], 99.95th=[ 1188], 00:39:22.369 | 99.99th=[ 1188] 00:39:22.369 bw ( KiB/s): min= 4096, max= 4096, per=41.64%, avg=4096.00, stdev= 0.00, samples=1 00:39:22.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:22.369 lat (usec) : 250=67.54%, 500=28.14%, 1000=0.19% 00:39:22.369 lat (msec) : 2=0.19%, 50=3.94% 00:39:22.369 cpu : usr=0.10%, sys=0.69%, ctx=535, majf=0, minf=1 00:39:22.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.369 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:22.369 job1: (groupid=0, jobs=1): err= 0: pid=445108: Mon Oct 14 13:50:13 2024 00:39:22.369 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:39:22.369 slat (nsec): min=8017, max=27532, avg=14281.23, stdev=3286.28 00:39:22.369 clat (usec): min=40914, max=41051, avg=40981.88, stdev=29.43 00:39:22.369 lat (usec): min=40927, max=41065, avg=40996.16, stdev=29.72 00:39:22.369 clat percentiles (usec): 00:39:22.369 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:22.369 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:22.369 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:22.369 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:22.369 | 99.99th=[41157] 00:39:22.369 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:39:22.369 slat (nsec): min=7665, max=28343, avg=9554.48, stdev=2482.18 00:39:22.369 clat (usec): min=155, max=495, avg=216.45, stdev=31.11 00:39:22.369 lat (usec): min=164, max=504, avg=226.00, stdev=31.37 00:39:22.369 clat 
percentiles (usec): 00:39:22.369 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:39:22.369 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 221], 60.00th=[ 233], 00:39:22.369 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 255], 00:39:22.369 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 494], 99.95th=[ 494], 00:39:22.369 | 99.99th=[ 494] 00:39:22.369 bw ( KiB/s): min= 4096, max= 4096, per=41.64%, avg=4096.00, stdev= 0.00, samples=1 00:39:22.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:22.369 lat (usec) : 250=87.27%, 500=8.61% 00:39:22.369 lat (msec) : 50=4.12% 00:39:22.370 cpu : usr=0.39%, sys=0.59%, ctx=536, majf=0, minf=1 00:39:22.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.370 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:22.370 job2: (groupid=0, jobs=1): err= 0: pid=445109: Mon Oct 14 13:50:13 2024 00:39:22.370 read: IOPS=516, BW=2064KiB/s (2114kB/s)(2116KiB/1025msec) 00:39:22.370 slat (nsec): min=4411, max=25018, avg=6741.29, stdev=3149.04 00:39:22.370 clat (usec): min=202, max=41029, avg=1542.82, stdev=7189.94 00:39:22.370 lat (usec): min=208, max=41043, avg=1549.56, stdev=7191.14 00:39:22.370 clat percentiles (usec): 00:39:22.370 | 1.00th=[ 206], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:39:22.370 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:39:22.370 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 293], 00:39:22.370 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:22.370 | 99.99th=[41157] 00:39:22.370 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:39:22.370 slat (nsec): min=5534, max=30772, avg=8000.66, 
stdev=2791.91 00:39:22.370 clat (usec): min=145, max=457, avg=188.50, stdev=43.86 00:39:22.370 lat (usec): min=151, max=464, avg=196.50, stdev=44.69 00:39:22.370 clat percentiles (usec): 00:39:22.370 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:39:22.370 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 178], 60.00th=[ 186], 00:39:22.370 | 70.00th=[ 196], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 269], 00:39:22.370 | 99.00th=[ 379], 99.50th=[ 379], 99.90th=[ 392], 99.95th=[ 457], 00:39:22.370 | 99.99th=[ 457] 00:39:22.370 bw ( KiB/s): min= 8192, max= 8192, per=83.28%, avg=8192.00, stdev= 0.00, samples=1 00:39:22.370 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:22.370 lat (usec) : 250=87.83%, 500=11.08% 00:39:22.370 lat (msec) : 50=1.09% 00:39:22.370 cpu : usr=0.49%, sys=1.46%, ctx=1553, majf=0, minf=1 00:39:22.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.370 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:22.370 job3: (groupid=0, jobs=1): err= 0: pid=445110: Mon Oct 14 13:50:13 2024 00:39:22.370 read: IOPS=67, BW=269KiB/s (275kB/s)(280KiB/1041msec) 00:39:22.370 slat (nsec): min=6418, max=14842, avg=9090.64, stdev=3175.43 00:39:22.370 clat (usec): min=229, max=42056, avg=13300.71, stdev=19408.38 00:39:22.370 lat (usec): min=237, max=42064, avg=13309.80, stdev=19411.31 00:39:22.370 clat percentiles (usec): 00:39:22.370 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:39:22.370 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:39:22.370 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:22.370 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:39:22.370 | 99.99th=[42206] 00:39:22.370 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:39:22.370 slat (nsec): min=7766, max=24685, avg=9479.26, stdev=2595.06 00:39:22.370 clat (usec): min=161, max=402, avg=199.16, stdev=25.85 00:39:22.370 lat (usec): min=170, max=414, avg=208.64, stdev=26.68 00:39:22.370 clat percentiles (usec): 00:39:22.370 | 1.00th=[ 169], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:39:22.370 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:39:22.370 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 231], 95.00th=[ 241], 00:39:22.370 | 99.00th=[ 281], 99.50th=[ 363], 99.90th=[ 404], 99.95th=[ 404], 00:39:22.370 | 99.99th=[ 404] 00:39:22.370 bw ( KiB/s): min= 4096, max= 4096, per=41.64%, avg=4096.00, stdev= 0.00, samples=1 00:39:22.370 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:22.370 lat (usec) : 250=89.69%, 500=6.53% 00:39:22.370 lat (msec) : 50=3.78% 00:39:22.370 cpu : usr=0.38%, sys=0.67%, ctx=583, majf=0, minf=1 00:39:22.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.370 issued rwts: total=70,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:22.370 00:39:22.370 Run status group 0 (all jobs): 00:39:22.370 READ: bw=2467KiB/s (2526kB/s), 82.8KiB/s-2064KiB/s (84.8kB/s-2114kB/s), io=2568KiB (2630kB), run=1014-1041msec 00:39:22.370 WRITE: bw=9837KiB/s (10.1MB/s), 1967KiB/s-3996KiB/s (2015kB/s-4092kB/s), io=10.0MiB (10.5MB), run=1014-1041msec 00:39:22.370 00:39:22.370 Disk stats (read/write): 00:39:22.370 nvme0n1: ios=45/512, merge=0/0, ticks=1630/124, in_queue=1754, util=94.59% 00:39:22.370 nvme0n2: ios=41/512, merge=0/0, ticks=1641/112, in_queue=1753, util=95.43% 00:39:22.370 nvme0n3: 
ios=580/1024, merge=0/0, ticks=632/194, in_queue=826, util=90.32% 00:39:22.370 nvme0n4: ios=124/512, merge=0/0, ticks=949/97, in_queue=1046, util=98.95% 00:39:22.370 13:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:22.370 [global] 00:39:22.370 thread=1 00:39:22.370 invalidate=1 00:39:22.370 rw=write 00:39:22.370 time_based=1 00:39:22.370 runtime=1 00:39:22.370 ioengine=libaio 00:39:22.370 direct=1 00:39:22.370 bs=4096 00:39:22.370 iodepth=128 00:39:22.370 norandommap=0 00:39:22.370 numjobs=1 00:39:22.370 00:39:22.370 verify_dump=1 00:39:22.370 verify_backlog=512 00:39:22.370 verify_state_save=0 00:39:22.370 do_verify=1 00:39:22.370 verify=crc32c-intel 00:39:22.370 [job0] 00:39:22.370 filename=/dev/nvme0n1 00:39:22.370 [job1] 00:39:22.370 filename=/dev/nvme0n2 00:39:22.370 [job2] 00:39:22.370 filename=/dev/nvme0n3 00:39:22.370 [job3] 00:39:22.370 filename=/dev/nvme0n4 00:39:22.370 Could not set queue depth (nvme0n1) 00:39:22.370 Could not set queue depth (nvme0n2) 00:39:22.370 Could not set queue depth (nvme0n3) 00:39:22.370 Could not set queue depth (nvme0n4) 00:39:22.370 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:22.370 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:22.370 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:22.370 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:22.370 fio-3.35 00:39:22.370 Starting 4 threads 00:39:23.746 00:39:23.746 job0: (groupid=0, jobs=1): err= 0: pid=445352: Mon Oct 14 13:50:15 2024 00:39:23.746 read: IOPS=3133, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1004msec) 00:39:23.746 slat (usec): min=2, max=11512, avg=146.42, 
stdev=730.69 00:39:23.746 clat (usec): min=3198, max=39879, avg=18950.09, stdev=8998.84 00:39:23.746 lat (usec): min=7191, max=44664, avg=19096.51, stdev=9075.50 00:39:23.746 clat percentiles (usec): 00:39:23.746 | 1.00th=[ 7439], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:39:23.746 | 30.00th=[10552], 40.00th=[12125], 50.00th=[15795], 60.00th=[22938], 00:39:23.746 | 70.00th=[24511], 80.00th=[26608], 90.00th=[32900], 95.00th=[34866], 00:39:23.746 | 99.00th=[38011], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:39:23.746 | 99.99th=[40109] 00:39:23.746 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:39:23.746 slat (usec): min=4, max=10366, avg=138.92, stdev=621.15 00:39:23.746 clat (usec): min=7870, max=42772, avg=18821.85, stdev=8849.56 00:39:23.746 lat (usec): min=7886, max=44230, avg=18960.77, stdev=8925.24 00:39:23.746 clat percentiles (usec): 00:39:23.746 | 1.00th=[ 8160], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10814], 00:39:23.746 | 30.00th=[11076], 40.00th=[11731], 50.00th=[14877], 60.00th=[22414], 00:39:23.746 | 70.00th=[25035], 80.00th=[27919], 90.00th=[31065], 95.00th=[33817], 00:39:23.746 | 99.00th=[40109], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:39:23.746 | 99.99th=[42730] 00:39:23.746 bw ( KiB/s): min=10776, max=17472, per=23.46%, avg=14124.00, stdev=4734.79, samples=2 00:39:23.746 iops : min= 2694, max= 4368, avg=3531.00, stdev=1183.70, samples=2 00:39:23.746 lat (msec) : 4=0.01%, 10=9.63%, 20=45.30%, 50=45.05% 00:39:23.746 cpu : usr=4.89%, sys=8.97%, ctx=504, majf=0, minf=1 00:39:23.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:23.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:23.746 issued rwts: total=3146,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:23.746 
job1: (groupid=0, jobs=1): err= 0: pid=445364: Mon Oct 14 13:50:15 2024 00:39:23.746 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:39:23.746 slat (usec): min=2, max=18958, avg=113.91, stdev=881.25 00:39:23.746 clat (usec): min=1994, max=105899, avg=14589.29, stdev=11896.72 00:39:23.746 lat (msec): min=2, max=105, avg=14.70, stdev=11.97 00:39:23.746 clat percentiles (msec): 00:39:23.746 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:39:23.746 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:39:23.746 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 20], 95.00th=[ 42], 00:39:23.746 | 99.00th=[ 70], 99.50th=[ 70], 99.90th=[ 70], 99.95th=[ 106], 00:39:23.746 | 99.99th=[ 106] 00:39:23.747 write: IOPS=4710, BW=18.4MiB/s (19.3MB/s)(18.4MiB/1002msec); 0 zone resets 00:39:23.747 slat (usec): min=3, max=18231, avg=90.89, stdev=721.82 00:39:23.747 clat (usec): min=283, max=38625, avg=12211.06, stdev=5103.13 00:39:23.747 lat (usec): min=1576, max=38668, avg=12301.95, stdev=5139.68 00:39:23.747 clat percentiles (usec): 00:39:23.747 | 1.00th=[ 3032], 5.00th=[ 5604], 10.00th=[ 7832], 20.00th=[ 9634], 00:39:23.747 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:39:23.747 | 70.00th=[11863], 80.00th=[12649], 90.00th=[20317], 95.00th=[21103], 00:39:23.747 | 99.00th=[29754], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:39:23.747 | 99.99th=[38536] 00:39:23.747 bw ( KiB/s): min=16376, max=20544, per=30.66%, avg=18460.00, stdev=2947.22, samples=2 00:39:23.747 iops : min= 4094, max= 5136, avg=4615.00, stdev=736.81, samples=2 00:39:23.747 lat (usec) : 500=0.01% 00:39:23.747 lat (msec) : 2=0.16%, 4=1.04%, 10=18.37%, 20=70.89%, 50=7.47% 00:39:23.747 lat (msec) : 100=2.00%, 250=0.04% 00:39:23.747 cpu : usr=3.50%, sys=6.09%, ctx=419, majf=0, minf=1 00:39:23.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:23.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:39:23.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:23.747 issued rwts: total=4608,4720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:23.747 job2: (groupid=0, jobs=1): err= 0: pid=445398: Mon Oct 14 13:50:15 2024 00:39:23.747 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:39:23.747 slat (usec): min=3, max=10347, avg=168.16, stdev=845.28 00:39:23.747 clat (usec): min=10388, max=65207, avg=21403.69, stdev=8329.59 00:39:23.747 lat (usec): min=10402, max=65251, avg=21571.85, stdev=8406.23 00:39:23.747 clat percentiles (usec): 00:39:23.747 | 1.00th=[12911], 5.00th=[13304], 10.00th=[13698], 20.00th=[14746], 00:39:23.747 | 30.00th=[17695], 40.00th=[19268], 50.00th=[20579], 60.00th=[21365], 00:39:23.747 | 70.00th=[22938], 80.00th=[23462], 90.00th=[26346], 95.00th=[40633], 00:39:23.747 | 99.00th=[54789], 99.50th=[57410], 99.90th=[58983], 99.95th=[62129], 00:39:23.747 | 99.99th=[65274] 00:39:23.747 write: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1004msec); 0 zone resets 00:39:23.747 slat (usec): min=4, max=53367, avg=198.23, stdev=1777.33 00:39:23.747 clat (msec): min=3, max=200, avg=19.45, stdev=11.53 00:39:23.747 lat (msec): min=7, max=200, avg=19.65, stdev=12.03 00:39:23.747 clat percentiles (msec): 00:39:23.747 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:39:23.747 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 18], 00:39:23.747 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 28], 00:39:23.747 | 99.00th=[ 68], 99.50th=[ 96], 99.90th=[ 201], 99.95th=[ 201], 00:39:23.747 | 99.99th=[ 201] 00:39:23.747 bw ( KiB/s): min= 8848, max=11848, per=17.19%, avg=10348.00, stdev=2121.32, samples=2 00:39:23.747 iops : min= 2212, max= 2962, avg=2587.00, stdev=530.33, samples=2 00:39:23.747 lat (msec) : 4=0.02%, 10=0.85%, 20=56.92%, 50=40.08%, 100=1.97% 00:39:23.747 lat (msec) : 250=0.15% 00:39:23.747 cpu : usr=3.99%, sys=6.58%, 
ctx=244, majf=0, minf=1 00:39:23.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:23.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:23.747 issued rwts: total=2560,2714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:23.747 job3: (groupid=0, jobs=1): err= 0: pid=445406: Mon Oct 14 13:50:15 2024 00:39:23.747 read: IOPS=4049, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1003msec) 00:39:23.747 slat (usec): min=2, max=14596, avg=112.60, stdev=689.14 00:39:23.747 clat (usec): min=760, max=32966, avg=16022.62, stdev=4633.12 00:39:23.747 lat (usec): min=3941, max=32974, avg=16135.22, stdev=4652.78 00:39:23.747 clat percentiles (usec): 00:39:23.747 | 1.00th=[ 4752], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[12649], 00:39:23.747 | 30.00th=[13435], 40.00th=[14091], 50.00th=[15401], 60.00th=[16581], 00:39:23.747 | 70.00th=[17695], 80.00th=[19006], 90.00th=[22676], 95.00th=[24511], 00:39:23.747 | 99.00th=[29230], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:39:23.747 | 99.99th=[32900] 00:39:23.747 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:39:23.747 slat (usec): min=3, max=16158, avg=115.30, stdev=828.08 00:39:23.747 clat (usec): min=789, max=36911, avg=15193.13, stdev=3402.46 00:39:23.747 lat (usec): min=801, max=36952, avg=15308.43, stdev=3488.05 00:39:23.747 clat percentiles (usec): 00:39:23.747 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[12649], 00:39:23.747 | 30.00th=[13173], 40.00th=[13698], 50.00th=[15139], 60.00th=[15926], 00:39:23.747 | 70.00th=[16712], 80.00th=[17957], 90.00th=[20055], 95.00th=[20841], 00:39:23.747 | 99.00th=[23725], 99.50th=[24249], 99.90th=[27395], 99.95th=[29754], 00:39:23.747 | 99.99th=[36963] 00:39:23.747 bw ( KiB/s): min=16384, max=16416, per=27.24%, avg=16400.00, stdev=22.63, 
samples=2 00:39:23.747 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:39:23.747 lat (usec) : 1000=0.06% 00:39:23.747 lat (msec) : 4=0.12%, 10=6.20%, 20=80.41%, 50=13.20% 00:39:23.747 cpu : usr=4.39%, sys=7.49%, ctx=302, majf=0, minf=2 00:39:23.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:23.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:23.747 issued rwts: total=4062,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:23.747 00:39:23.747 Run status group 0 (all jobs): 00:39:23.747 READ: bw=55.9MiB/s (58.6MB/s), 9.96MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=56.2MiB (58.9MB), run=1002-1004msec 00:39:23.747 WRITE: bw=58.8MiB/s (61.7MB/s), 10.6MiB/s-18.4MiB/s (11.1MB/s-19.3MB/s), io=59.0MiB (61.9MB), run=1002-1004msec 00:39:23.747 00:39:23.747 Disk stats (read/write): 00:39:23.747 nvme0n1: ios=3072/3072, merge=0/0, ticks=18221/15669, in_queue=33890, util=86.87% 00:39:23.747 nvme0n2: ios=3636/4071, merge=0/0, ticks=38784/43188, in_queue=81972, util=97.76% 00:39:23.747 nvme0n3: ios=2089/2206, merge=0/0, ticks=15243/11139, in_queue=26382, util=98.33% 00:39:23.747 nvme0n4: ios=3160/3584, merge=0/0, ticks=32501/33691, in_queue=66192, util=95.57% 00:39:23.747 13:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:23.747 [global] 00:39:23.747 thread=1 00:39:23.747 invalidate=1 00:39:23.747 rw=randwrite 00:39:23.747 time_based=1 00:39:23.747 runtime=1 00:39:23.747 ioengine=libaio 00:39:23.747 direct=1 00:39:23.747 bs=4096 00:39:23.747 iodepth=128 00:39:23.747 norandommap=0 00:39:23.747 numjobs=1 00:39:23.747 00:39:23.747 verify_dump=1 00:39:23.747 verify_backlog=512 00:39:23.747 
verify_state_save=0 00:39:23.747 do_verify=1 00:39:23.747 verify=crc32c-intel 00:39:23.747 [job0] 00:39:23.747 filename=/dev/nvme0n1 00:39:23.747 [job1] 00:39:23.747 filename=/dev/nvme0n2 00:39:23.747 [job2] 00:39:23.747 filename=/dev/nvme0n3 00:39:23.747 [job3] 00:39:23.747 filename=/dev/nvme0n4 00:39:23.747 Could not set queue depth (nvme0n1) 00:39:23.748 Could not set queue depth (nvme0n2) 00:39:23.748 Could not set queue depth (nvme0n3) 00:39:23.748 Could not set queue depth (nvme0n4) 00:39:23.748 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:23.748 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:23.748 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:23.748 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:23.748 fio-3.35 00:39:23.748 Starting 4 threads 00:39:25.122 00:39:25.122 job0: (groupid=0, jobs=1): err= 0: pid=445681: Mon Oct 14 13:50:16 2024 00:39:25.122 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:39:25.122 slat (usec): min=2, max=11223, avg=102.14, stdev=581.74 00:39:25.122 clat (usec): min=7285, max=48966, avg=13555.43, stdev=6285.52 00:39:25.122 lat (usec): min=7302, max=48970, avg=13657.57, stdev=6326.57 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 8356], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10814], 00:39:25.123 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:39:25.123 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15664], 95.00th=[25560], 00:39:25.123 | 99.00th=[44827], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:39:25.123 | 99.99th=[49021] 00:39:25.123 write: IOPS=3676, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1005msec); 0 zone resets 00:39:25.123 slat (usec): min=3, max=27962, avg=162.18, stdev=1018.74 00:39:25.123 
clat (usec): min=4589, max=68391, avg=21177.19, stdev=14066.36 00:39:25.123 lat (usec): min=5225, max=68413, avg=21339.38, stdev=14150.64 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 6652], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[11076], 00:39:25.123 | 30.00th=[11863], 40.00th=[13042], 50.00th=[17171], 60.00th=[20055], 00:39:25.123 | 70.00th=[21890], 80.00th=[28181], 90.00th=[45351], 95.00th=[54264], 00:39:25.123 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:39:25.123 | 99.99th=[68682] 00:39:25.123 bw ( KiB/s): min=14080, max=14648, per=24.75%, avg=14364.00, stdev=401.64, samples=2 00:39:25.123 iops : min= 3520, max= 3662, avg=3591.00, stdev=100.41, samples=2 00:39:25.123 lat (msec) : 10=10.65%, 20=65.83%, 50=20.06%, 100=3.46% 00:39:25.123 cpu : usr=3.49%, sys=6.97%, ctx=403, majf=0, minf=1 00:39:25.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:25.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:25.123 issued rwts: total=3584,3695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:25.123 job1: (groupid=0, jobs=1): err= 0: pid=445682: Mon Oct 14 13:50:16 2024 00:39:25.123 read: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(13.9MiB/1010msec) 00:39:25.123 slat (usec): min=2, max=21491, avg=126.01, stdev=881.66 00:39:25.123 clat (usec): min=3485, max=44341, avg=16275.51, stdev=5897.64 00:39:25.123 lat (usec): min=6394, max=44344, avg=16401.52, stdev=5964.75 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11338], 00:39:25.123 | 30.00th=[12125], 40.00th=[12780], 50.00th=[14091], 60.00th=[16319], 00:39:25.123 | 70.00th=[19530], 80.00th=[22414], 90.00th=[24773], 95.00th=[27132], 00:39:25.123 | 99.00th=[36439], 99.50th=[36439], 99.90th=[40633], 99.95th=[44303], 
00:39:25.123 | 99.99th=[44303] 00:39:25.123 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:39:25.123 slat (usec): min=3, max=12632, avg=149.46, stdev=936.88 00:39:25.123 clat (usec): min=5809, max=53512, avg=19299.71, stdev=10063.39 00:39:25.123 lat (usec): min=5813, max=53538, avg=19449.16, stdev=10148.68 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 6259], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10945], 00:39:25.123 | 30.00th=[11863], 40.00th=[15795], 50.00th=[18744], 60.00th=[20317], 00:39:25.123 | 70.00th=[21365], 80.00th=[23725], 90.00th=[31065], 95.00th=[44303], 00:39:25.123 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:39:25.123 | 99.99th=[53740] 00:39:25.123 bw ( KiB/s): min=12920, max=15752, per=24.70%, avg=14336.00, stdev=2002.53, samples=2 00:39:25.123 iops : min= 3230, max= 3938, avg=3584.00, stdev=500.63, samples=2 00:39:25.123 lat (msec) : 4=0.01%, 10=10.48%, 20=54.33%, 50=33.91%, 100=1.26% 00:39:25.123 cpu : usr=2.18%, sys=3.96%, ctx=327, majf=0, minf=1 00:39:25.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:25.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:25.123 issued rwts: total=3546,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:25.123 job2: (groupid=0, jobs=1): err= 0: pid=445683: Mon Oct 14 13:50:16 2024 00:39:25.123 read: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1002msec) 00:39:25.123 slat (usec): min=2, max=13415, avg=124.20, stdev=768.96 00:39:25.123 clat (usec): min=594, max=38403, avg=15176.38, stdev=4764.18 00:39:25.123 lat (usec): min=1928, max=38408, avg=15300.58, stdev=4800.34 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 4490], 5.00th=[10421], 10.00th=[11863], 20.00th=[12125], 00:39:25.123 | 30.00th=[13173], 
40.00th=[13435], 50.00th=[13829], 60.00th=[14484], 00:39:25.123 | 70.00th=[15139], 80.00th=[16909], 90.00th=[22676], 95.00th=[25560], 00:39:25.123 | 99.00th=[32113], 99.50th=[32637], 99.90th=[34866], 99.95th=[34866], 00:39:25.123 | 99.99th=[38536] 00:39:25.123 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:39:25.123 slat (usec): min=3, max=13495, avg=150.23, stdev=774.84 00:39:25.123 clat (usec): min=755, max=52788, avg=20390.52, stdev=11134.00 00:39:25.123 lat (usec): min=772, max=52793, avg=20540.75, stdev=11213.02 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 3851], 5.00th=[ 6849], 10.00th=[10814], 20.00th=[12649], 00:39:25.123 | 30.00th=[13566], 40.00th=[13960], 50.00th=[15664], 60.00th=[20579], 00:39:25.123 | 70.00th=[22676], 80.00th=[29492], 90.00th=[38536], 95.00th=[45351], 00:39:25.123 | 99.00th=[50594], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:39:25.123 | 99.99th=[52691] 00:39:25.123 bw ( KiB/s): min=12288, max=16384, per=24.70%, avg=14336.00, stdev=2896.31, samples=2 00:39:25.123 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:39:25.123 lat (usec) : 750=0.01%, 1000=0.03% 00:39:25.123 lat (msec) : 2=0.17%, 4=0.74%, 10=5.15%, 20=65.46%, 50=27.84% 00:39:25.123 lat (msec) : 100=0.59% 00:39:25.123 cpu : usr=2.10%, sys=4.20%, ctx=412, majf=0, minf=2 00:39:25.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:25.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:25.123 issued rwts: total=3536,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:25.123 job3: (groupid=0, jobs=1): err= 0: pid=445684: Mon Oct 14 13:50:16 2024 00:39:25.123 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:39:25.123 slat (usec): min=2, max=11688, avg=127.29, stdev=811.70 
00:39:25.123 clat (usec): min=1183, max=34450, avg=16599.22, stdev=5350.07 00:39:25.123 lat (usec): min=1223, max=34478, avg=16726.51, stdev=5418.80 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 7111], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[12125], 00:39:25.123 | 30.00th=[12780], 40.00th=[13304], 50.00th=[15270], 60.00th=[18482], 00:39:25.123 | 70.00th=[20055], 80.00th=[22414], 90.00th=[23987], 95.00th=[25297], 00:39:25.123 | 99.00th=[28443], 99.50th=[29492], 99.90th=[31327], 99.95th=[33817], 00:39:25.123 | 99.99th=[34341] 00:39:25.123 write: IOPS=3754, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1010msec); 0 zone resets 00:39:25.123 slat (usec): min=2, max=12019, avg=132.94, stdev=861.10 00:39:25.123 clat (usec): min=2106, max=48080, avg=18034.61, stdev=7412.72 00:39:25.123 lat (usec): min=5022, max=48085, avg=18167.55, stdev=7488.44 00:39:25.123 clat percentiles (usec): 00:39:25.123 | 1.00th=[ 6521], 5.00th=[ 9372], 10.00th=[11863], 20.00th=[12649], 00:39:25.123 | 30.00th=[13173], 40.00th=[14353], 50.00th=[16319], 60.00th=[18744], 00:39:25.123 | 70.00th=[20055], 80.00th=[22414], 90.00th=[25035], 95.00th=[32900], 00:39:25.123 | 99.00th=[45876], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:39:25.123 | 99.99th=[47973] 00:39:25.123 bw ( KiB/s): min=12288, max=17024, per=25.25%, avg=14656.00, stdev=3348.86, samples=2 00:39:25.123 iops : min= 3072, max= 4256, avg=3664.00, stdev=837.21, samples=2 00:39:25.123 lat (msec) : 2=0.01%, 4=0.01%, 10=7.17%, 20=63.02%, 50=29.79% 00:39:25.123 cpu : usr=2.18%, sys=3.67%, ctx=305, majf=0, minf=1 00:39:25.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:25.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:25.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:25.123 issued rwts: total=3584,3792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:25.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:25.123 
00:39:25.123 Run status group 0 (all jobs): 00:39:25.123 READ: bw=55.1MiB/s (57.8MB/s), 13.7MiB/s-13.9MiB/s (14.4MB/s-14.6MB/s), io=55.7MiB (58.4MB), run=1002-1010msec 00:39:25.123 WRITE: bw=56.7MiB/s (59.4MB/s), 13.9MiB/s-14.7MiB/s (14.5MB/s-15.4MB/s), io=57.2MiB (60.0MB), run=1002-1010msec 00:39:25.123 00:39:25.123 Disk stats (read/write): 00:39:25.123 nvme0n1: ios=2584/2975, merge=0/0, ticks=19003/34838, in_queue=53841, util=93.69% 00:39:25.123 nvme0n2: ios=2898/3072, merge=0/0, ticks=22256/30265, in_queue=52521, util=99.09% 00:39:25.123 nvme0n3: ios=3096/3072, merge=0/0, ticks=35292/55060, in_queue=90352, util=99.06% 00:39:25.123 nvme0n4: ios=3123/3118, merge=0/0, ticks=27071/26650, in_queue=53721, util=90.77% 00:39:25.123 13:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:25.123 13:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=445822 00:39:25.123 13:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:25.123 13:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:25.123 [global] 00:39:25.123 thread=1 00:39:25.123 invalidate=1 00:39:25.123 rw=read 00:39:25.123 time_based=1 00:39:25.123 runtime=10 00:39:25.123 ioengine=libaio 00:39:25.123 direct=1 00:39:25.123 bs=4096 00:39:25.123 iodepth=1 00:39:25.123 norandommap=1 00:39:25.123 numjobs=1 00:39:25.123 00:39:25.123 [job0] 00:39:25.123 filename=/dev/nvme0n1 00:39:25.123 [job1] 00:39:25.123 filename=/dev/nvme0n2 00:39:25.123 [job2] 00:39:25.123 filename=/dev/nvme0n3 00:39:25.123 [job3] 00:39:25.123 filename=/dev/nvme0n4 00:39:25.123 Could not set queue depth (nvme0n1) 00:39:25.123 Could not set queue depth (nvme0n2) 00:39:25.123 Could not set queue depth (nvme0n3) 00:39:25.123 Could not set queue depth (nvme0n4) 
00:39:25.381 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:25.381 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:25.381 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:25.381 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:25.381 fio-3.35 00:39:25.381 Starting 4 threads 00:39:28.659 13:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:28.659 13:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:28.659 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46055424, buflen=4096 00:39:28.659 fio: pid=445918, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:28.659 13:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:28.659 13:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:28.659 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9302016, buflen=4096 00:39:28.659 fio: pid=445917, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:28.917 13:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:28.917 13:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:28.917 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10481664, buflen=4096 00:39:28.917 fio: pid=445915, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:29.175 13:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:29.175 13:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:29.175 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4534272, buflen=4096 00:39:29.175 fio: pid=445916, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:29.175 00:39:29.175 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=445915: Mon Oct 14 13:50:20 2024 00:39:29.175 read: IOPS=727, BW=2910KiB/s (2980kB/s)(10.00MiB/3517msec) 00:39:29.175 slat (usec): min=4, max=7917, avg=16.04, stdev=267.48 00:39:29.175 clat (usec): min=216, max=42023, avg=1345.29, stdev=6580.64 00:39:29.175 lat (usec): min=222, max=48981, avg=1361.33, stdev=6605.65 00:39:29.175 clat percentiles (usec): 00:39:29.175 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 239], 00:39:29.175 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:39:29.175 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[ 359], 00:39:29.175 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:29.175 | 99.99th=[42206] 00:39:29.175 bw ( KiB/s): min= 96, max= 5864, per=11.46%, avg=2086.67, stdev=2593.76, samples=6 00:39:29.175 iops : min= 24, max= 1466, avg=521.67, stdev=648.44, samples=6 00:39:29.175 lat (usec) : 250=48.91%, 500=48.24%, 750=0.08% 00:39:29.175 lat (msec) : 2=0.08%, 50=2.66% 
00:39:29.176 cpu : usr=0.14%, sys=0.97%, ctx=2563, majf=0, minf=2 00:39:29.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:29.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 issued rwts: total=2560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:29.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:29.176 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=445916: Mon Oct 14 13:50:20 2024 00:39:29.176 read: IOPS=293, BW=1173KiB/s (1201kB/s)(4428KiB/3776msec) 00:39:29.176 slat (usec): min=4, max=5866, avg=18.54, stdev=227.06 00:39:29.176 clat (usec): min=199, max=48814, avg=3379.79, stdev=10874.94 00:39:29.176 lat (usec): min=209, max=48827, avg=3398.33, stdev=10913.76 00:39:29.176 clat percentiles (usec): 00:39:29.176 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:39:29.176 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:39:29.176 | 70.00th=[ 251], 80.00th=[ 285], 90.00th=[ 453], 95.00th=[41157], 00:39:29.176 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[49021], 00:39:29.176 | 99.99th=[49021] 00:39:29.176 bw ( KiB/s): min= 96, max= 7008, per=6.91%, avg=1257.71, stdev=2575.14, samples=7 00:39:29.176 iops : min= 24, max= 1752, avg=314.43, stdev=643.78, samples=7 00:39:29.176 lat (usec) : 250=69.49%, 500=22.74% 00:39:29.176 lat (msec) : 50=7.67% 00:39:29.176 cpu : usr=0.03%, sys=0.37%, ctx=1112, majf=0, minf=1 00:39:29.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:29.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:29.176 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:39:29.176 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=445917: Mon Oct 14 13:50:20 2024 00:39:29.176 read: IOPS=702, BW=2808KiB/s (2875kB/s)(9084KiB/3235msec) 00:39:29.176 slat (nsec): min=4711, max=71791, avg=16124.46, stdev=9433.48 00:39:29.176 clat (usec): min=248, max=42057, avg=1394.14, stdev=6433.70 00:39:29.176 lat (usec): min=256, max=42073, avg=1410.26, stdev=6434.13 00:39:29.176 clat percentiles (usec): 00:39:29.176 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 297], 00:39:29.176 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 330], 60.00th=[ 351], 00:39:29.176 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 400], 95.00th=[ 469], 00:39:29.176 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:29.176 | 99.99th=[42206] 00:39:29.176 bw ( KiB/s): min= 96, max= 8024, per=16.59%, avg=3020.00, stdev=2914.92, samples=6 00:39:29.176 iops : min= 24, max= 2006, avg=755.00, stdev=728.73, samples=6 00:39:29.176 lat (usec) : 250=0.04%, 500=95.77%, 750=1.50% 00:39:29.176 lat (msec) : 20=0.04%, 50=2.60% 00:39:29.176 cpu : usr=0.65%, sys=1.08%, ctx=2272, majf=0, minf=1 00:39:29.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:29.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:29.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:29.176 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=445918: Mon Oct 14 13:50:20 2024 00:39:29.176 read: IOPS=3861, BW=15.1MiB/s (15.8MB/s)(43.9MiB/2912msec) 00:39:29.176 slat (nsec): min=4198, max=66618, avg=10233.44, stdev=7409.76 00:39:29.176 clat (usec): min=201, max=1969, avg=244.33, stdev=45.48 00:39:29.176 lat (usec): 
min=207, max=1983, avg=254.56, stdev=48.68 00:39:29.176 clat percentiles (usec): 00:39:29.176 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:39:29.176 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:39:29.176 | 70.00th=[ 247], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 334], 00:39:29.176 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 502], 99.95th=[ 553], 00:39:29.176 | 99.99th=[ 1123] 00:39:29.176 bw ( KiB/s): min=13672, max=16752, per=86.87%, avg=15811.20, stdev=1241.92, samples=5 00:39:29.176 iops : min= 3418, max= 4188, avg=3952.80, stdev=310.48, samples=5 00:39:29.176 lat (usec) : 250=71.46%, 500=28.42%, 750=0.08%, 1000=0.01% 00:39:29.176 lat (msec) : 2=0.02% 00:39:29.176 cpu : usr=1.24%, sys=5.02%, ctx=11245, majf=0, minf=1 00:39:29.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:29.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:29.176 issued rwts: total=11245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:29.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:29.176 00:39:29.176 Run status group 0 (all jobs): 00:39:29.176 READ: bw=17.8MiB/s (18.6MB/s), 1173KiB/s-15.1MiB/s (1201kB/s-15.8MB/s), io=67.1MiB (70.4MB), run=2912-3776msec 00:39:29.176 00:39:29.176 Disk stats (read/write): 00:39:29.176 nvme0n1: ios=2555/0, merge=0/0, ticks=3267/0, in_queue=3267, util=95.28% 00:39:29.176 nvme0n2: ios=1125/0, merge=0/0, ticks=4024/0, in_queue=4024, util=99.36% 00:39:29.176 nvme0n3: ios=2268/0, merge=0/0, ticks=3012/0, in_queue=3012, util=96.76% 00:39:29.176 nvme0n4: ios=11148/0, merge=0/0, ticks=2639/0, in_queue=2639, util=96.74% 00:39:29.435 13:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:29.435 13:50:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:29.696 13:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:29.696 13:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:29.955 13:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:29.955 13:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:30.212 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:30.212 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 445822 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:30.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:30.778 nvmf hotplug test: fio failed as expected 00:39:30.778 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:31.037 13:50:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.037 rmmod nvme_tcp 00:39:31.037 rmmod nvme_fabrics 00:39:31.037 rmmod nvme_keyring 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 443823 ']' 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 443823 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 443823 ']' 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 443823 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 443823 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 443823' 00:39:31.037 killing process with pid 443823 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 443823 00:39:31.037 13:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 443823 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:31.295 13:50:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.295 13:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.205 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:33.205 00:39:33.205 real 0m23.630s 00:39:33.205 user 1m7.234s 00:39:33.205 sys 0m9.430s 00:39:33.205 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:33.205 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:33.205 ************************************ 00:39:33.205 END TEST nvmf_fio_target 00:39:33.205 ************************************ 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:33.505 ************************************ 00:39:33.505 START TEST nvmf_bdevio 00:39:33.505 ************************************ 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:33.505 * Looking for test storage... 00:39:33.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:33.505 13:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:33.505 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.506 --rc genhtml_branch_coverage=1 00:39:33.506 --rc genhtml_function_coverage=1 00:39:33.506 --rc genhtml_legend=1 00:39:33.506 --rc geninfo_all_blocks=1 00:39:33.506 --rc geninfo_unexecuted_blocks=1 00:39:33.506 00:39:33.506 ' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.506 --rc genhtml_branch_coverage=1 00:39:33.506 --rc genhtml_function_coverage=1 00:39:33.506 --rc genhtml_legend=1 00:39:33.506 --rc geninfo_all_blocks=1 00:39:33.506 --rc geninfo_unexecuted_blocks=1 00:39:33.506 00:39:33.506 ' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.506 --rc genhtml_branch_coverage=1 00:39:33.506 --rc genhtml_function_coverage=1 00:39:33.506 --rc genhtml_legend=1 00:39:33.506 --rc geninfo_all_blocks=1 00:39:33.506 --rc geninfo_unexecuted_blocks=1 00:39:33.506 00:39:33.506 ' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:33.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.506 --rc genhtml_branch_coverage=1 00:39:33.506 --rc genhtml_function_coverage=1 00:39:33.506 --rc genhtml_legend=1 00:39:33.506 --rc 
geninfo_all_blocks=1 00:39:33.506 --rc geninfo_unexecuted_blocks=1 00:39:33.506 00:39:33.506 ' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:33.506 13:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:33.506 13:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:33.506 13:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:35.495 13:50:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:35.495 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:35.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:35.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:35.754 13:50:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:35.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:35.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:35.754 13:50:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:35.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:35.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:39:35.754 00:39:35.754 --- 10.0.0.2 ping statistics --- 00:39:35.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.754 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:35.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:35.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:39:35.754 00:39:35.754 --- 10.0.0.1 ping statistics --- 00:39:35.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.754 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:35.754 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@507 -- # nvmfpid=448543 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 448543 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 448543 ']' 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:35.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:35.755 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.013 [2024-10-14 13:50:27.635450] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:36.013 [2024-10-14 13:50:27.636478] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:39:36.013 [2024-10-14 13:50:27.636525] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:36.013 [2024-10-14 13:50:27.697468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:36.013 [2024-10-14 13:50:27.741852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:36.013 [2024-10-14 13:50:27.741924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:36.013 [2024-10-14 13:50:27.741967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:36.013 [2024-10-14 13:50:27.741978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:36.013 [2024-10-14 13:50:27.741988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:36.013 [2024-10-14 13:50:27.743447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:36.013 [2024-10-14 13:50:27.743508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:36.013 [2024-10-14 13:50:27.743572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:36.013 [2024-10-14 13:50:27.743574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:36.013 [2024-10-14 13:50:27.823873] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:36.013 [2024-10-14 13:50:27.824086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:36.013 [2024-10-14 13:50:27.824394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:36.013 [2024-10-14 13:50:27.824975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:36.013 [2024-10-14 13:50:27.825256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:36.013 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:36.013 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:39:36.013 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:36.013 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:36.013 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.270 [2024-10-14 13:50:27.884293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.270 Malloc0 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.270 [2024-10-14 13:50:27.952458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:39:36.270 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:39:36.271 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:36.271 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:36.271 { 00:39:36.271 "params": { 00:39:36.271 "name": "Nvme$subsystem", 00:39:36.271 "trtype": "$TEST_TRANSPORT", 00:39:36.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:36.271 "adrfam": "ipv4", 00:39:36.271 "trsvcid": "$NVMF_PORT", 00:39:36.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:36.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:36.271 "hdgst": ${hdgst:-false}, 00:39:36.271 "ddgst": ${ddgst:-false} 00:39:36.271 }, 00:39:36.271 "method": "bdev_nvme_attach_controller" 00:39:36.271 } 00:39:36.271 EOF 00:39:36.271 )") 00:39:36.271 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:39:36.271 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:39:36.271 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:39:36.271 13:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:36.271 "params": { 00:39:36.271 "name": "Nvme1", 00:39:36.271 "trtype": "tcp", 00:39:36.271 "traddr": "10.0.0.2", 00:39:36.271 "adrfam": "ipv4", 00:39:36.271 "trsvcid": "4420", 00:39:36.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:36.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:36.271 "hdgst": false, 00:39:36.271 "ddgst": false 00:39:36.271 }, 00:39:36.271 "method": "bdev_nvme_attach_controller" 00:39:36.271 }' 00:39:36.271 [2024-10-14 13:50:28.003553] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:39:36.271 [2024-10-14 13:50:28.003623] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448659 ] 00:39:36.271 [2024-10-14 13:50:28.064525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:36.271 [2024-10-14 13:50:28.117324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:36.271 [2024-10-14 13:50:28.117349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:36.271 [2024-10-14 13:50:28.117353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.527 I/O targets: 00:39:36.527 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:36.527 00:39:36.527 00:39:36.527 CUnit - A unit testing framework for C - Version 2.1-3 00:39:36.527 http://cunit.sourceforge.net/ 00:39:36.527 00:39:36.527 00:39:36.527 Suite: bdevio tests on: Nvme1n1 00:39:36.527 Test: blockdev write read block ...passed 00:39:36.785 Test: blockdev write zeroes read block ...passed 00:39:36.785 Test: blockdev write zeroes read no split ...passed 00:39:36.785 Test: blockdev 
write zeroes read split ...passed 00:39:36.785 Test: blockdev write zeroes read split partial ...passed 00:39:36.785 Test: blockdev reset ...[2024-10-14 13:50:28.442690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:36.785 [2024-10-14 13:50:28.442807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed5b80 (9): Bad file descriptor 00:39:36.785 [2024-10-14 13:50:28.535295] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:36.785 passed 00:39:36.785 Test: blockdev write read 8 blocks ...passed 00:39:36.785 Test: blockdev write read size > 128k ...passed 00:39:36.785 Test: blockdev write read invalid size ...passed 00:39:36.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:36.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:36.785 Test: blockdev write read max offset ...passed 00:39:37.043 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:37.043 Test: blockdev writev readv 8 blocks ...passed 00:39:37.043 Test: blockdev writev readv 30 x 1block ...passed 00:39:37.043 Test: blockdev writev readv block ...passed 00:39:37.043 Test: blockdev writev readv size > 128k ...passed 00:39:37.043 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:37.043 Test: blockdev comparev and writev ...[2024-10-14 13:50:28.791072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.791109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.791141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.791161] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.791570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.791595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.791618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.791634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.792014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.792038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.792060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.792076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.792479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.792504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.792525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:39:37.043 [2024-10-14 13:50:28.792541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:37.043 passed 00:39:37.043 Test: blockdev nvme passthru rw ...passed 00:39:37.043 Test: blockdev nvme passthru vendor specific ...[2024-10-14 13:50:28.875425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:37.043 [2024-10-14 13:50:28.875454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.875619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:37.043 [2024-10-14 13:50:28.875643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.875807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:37.043 [2024-10-14 13:50:28.875831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:37.043 [2024-10-14 13:50:28.875988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:37.043 [2024-10-14 13:50:28.876012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:37.043 passed 00:39:37.043 Test: blockdev nvme admin passthru ...passed 00:39:37.301 Test: blockdev copy ...passed 00:39:37.301 00:39:37.301 Run Summary: Type Total Ran Passed Failed Inactive 00:39:37.301 suites 1 1 n/a 0 0 00:39:37.301 tests 23 23 23 0 0 00:39:37.301 asserts 152 152 152 0 n/a 00:39:37.301 00:39:37.301 Elapsed time = 1.201 seconds 00:39:37.301 13:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:37.301 rmmod nvme_tcp 00:39:37.301 rmmod nvme_fabrics 00:39:37.301 rmmod nvme_keyring 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # 
'[' -n 448543 ']' 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 448543 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 448543 ']' 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 448543 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:37.301 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 448543 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 448543' 00:39:37.560 killing process with pid 448543 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 448543 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 448543 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:37.560 
13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.560 13:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:40.096 13:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:40.096 00:39:40.096 real 0m6.336s 00:39:40.096 user 0m8.016s 00:39:40.096 sys 0m2.557s 00:39:40.096 13:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:40.097 13:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:40.097 ************************************ 00:39:40.097 END TEST nvmf_bdevio 00:39:40.097 ************************************ 00:39:40.097 13:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:40.097 00:39:40.097 real 3m53.785s 00:39:40.097 user 8m50.669s 00:39:40.097 sys 1m23.809s 00:39:40.097 13:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:40.097 13:50:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:40.097 ************************************ 00:39:40.097 END TEST nvmf_target_core_interrupt_mode 00:39:40.097 ************************************ 00:39:40.097 13:50:31 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:40.097 13:50:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:40.097 13:50:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:40.097 13:50:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:40.097 ************************************ 00:39:40.097 START TEST nvmf_interrupt 00:39:40.097 ************************************ 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:40.097 * Looking for test storage... 
00:39:40.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:40.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.097 --rc genhtml_branch_coverage=1 00:39:40.097 --rc genhtml_function_coverage=1 00:39:40.097 --rc genhtml_legend=1 00:39:40.097 --rc geninfo_all_blocks=1 00:39:40.097 --rc geninfo_unexecuted_blocks=1 00:39:40.097 00:39:40.097 ' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:40.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.097 --rc genhtml_branch_coverage=1 00:39:40.097 --rc 
genhtml_function_coverage=1 00:39:40.097 --rc genhtml_legend=1 00:39:40.097 --rc geninfo_all_blocks=1 00:39:40.097 --rc geninfo_unexecuted_blocks=1 00:39:40.097 00:39:40.097 ' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:40.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.097 --rc genhtml_branch_coverage=1 00:39:40.097 --rc genhtml_function_coverage=1 00:39:40.097 --rc genhtml_legend=1 00:39:40.097 --rc geninfo_all_blocks=1 00:39:40.097 --rc geninfo_unexecuted_blocks=1 00:39:40.097 00:39:40.097 ' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:40.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.097 --rc genhtml_branch_coverage=1 00:39:40.097 --rc genhtml_function_coverage=1 00:39:40.097 --rc genhtml_legend=1 00:39:40.097 --rc geninfo_all_blocks=1 00:39:40.097 --rc geninfo_unexecuted_blocks=1 00:39:40.097 00:39:40.097 ' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:40.097 
13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.097 
13:50:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:40.097 13:50:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:40.097 13:50:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:40.098 
13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:40.098 13:50:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:41.997 13:50:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:41.997 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.997 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:41.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:41.998 13:50:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:41.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:41.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:41.998 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:42.257 13:50:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:42.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:42.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:39:42.257 00:39:42.257 --- 10.0.0.2 ping statistics --- 00:39:42.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.257 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:42.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:42.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:39:42.257 00:39:42.257 --- 10.0.0.1 ping statistics --- 00:39:42.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:42.257 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:42.257 13:50:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=450775 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 450775 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 450775 ']' 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:42.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:42.257 13:50:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.257 [2024-10-14 13:50:33.972187] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:42.257 [2024-10-14 13:50:33.973256] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:39:42.257 [2024-10-14 13:50:33.973310] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:42.257 [2024-10-14 13:50:34.039301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:42.257 [2024-10-14 13:50:34.084528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:42.257 [2024-10-14 13:50:34.084580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:42.257 [2024-10-14 13:50:34.084609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:42.257 [2024-10-14 13:50:34.084621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:42.257 [2024-10-14 13:50:34.084630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:42.257 [2024-10-14 13:50:34.086035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:42.257 [2024-10-14 13:50:34.086040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.516 [2024-10-14 13:50:34.169565] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:42.516 [2024-10-14 13:50:34.169602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:42.516 [2024-10-14 13:50:34.169853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:42.516 5000+0 records in 00:39:42.516 5000+0 records out 00:39:42.516 10240000 bytes (10 MB, 9.8 MiB) copied, 0.014682 s, 697 MB/s 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.516 AIO0 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.516 13:50:34 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.516 [2024-10-14 13:50:34.286729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:42.516 [2024-10-14 13:50:34.310964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 450775 0 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 450775 0 idle 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:42.516 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450775 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0' 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450775 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 450775 1 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 450775 1 idle 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:42.774 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:42.775 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:42.775 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450782 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1' 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450782 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 
reactor_1 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=450819 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 450775 0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 450775 0 busy 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450775 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:00.25 reactor_0' 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450775 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:00.25 reactor_0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:43.033 13:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
450775 -w 256 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450775 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:02.55 reactor_0' 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450775 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:02.55 reactor_0 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:44.407 13:50:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 450775 1 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 450775 1 busy 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:44.407 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:44.408 13:50:36 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450782 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:01.31 reactor_1' 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450782 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:01.31 reactor_1 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:44.408 13:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 450819 00:39:54.378 Initializing NVMe Controllers 00:39:54.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:54.379 Controller IO queue size 256, less than 
required. 00:39:54.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:54.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:54.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:54.379 Initialization complete. Launching workers. 00:39:54.379 ======================================================== 00:39:54.379 Latency(us) 00:39:54.379 Device Information : IOPS MiB/s Average min max 00:39:54.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13309.91 51.99 19246.96 4558.32 23413.65 00:39:54.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13418.41 52.42 19091.85 3967.77 22119.96 00:39:54.379 ======================================================== 00:39:54.379 Total : 26728.33 104.41 19169.09 3967.77 23413.65 00:39:54.379 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 450775 0 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 450775 0 idle 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:54.379 13:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450775 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:20.19 reactor_0' 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450775 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:20.19 reactor_0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 450775 1 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 450775 1 idle 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:54.379 13:50:45 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450782 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:09.97 reactor_1' 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450782 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:09.97 reactor_1 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 
00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:54.379 13:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 450775 0 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 450775 0 idle 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:55.752 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:55.753 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450775 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:20.29 reactor_0' 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450775 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:20.29 reactor_0 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:56.010 13:50:47 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 450775 1 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 450775 1 idle 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=450775 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 450775 -w 256 00:39:56.010 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 450782 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:10.00 reactor_1' 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 450782 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:10.00 reactor_1 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # 
cpu_rate=0.0 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:56.277 13:50:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:56.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:56.277 
13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:56.277 rmmod nvme_tcp 00:39:56.277 rmmod nvme_fabrics 00:39:56.277 rmmod nvme_keyring 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 450775 ']' 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 450775 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 450775 ']' 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 450775 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:56.277 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 450775 00:39:56.541 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:56.541 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:56.541 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 450775' 00:39:56.541 killing process with pid 450775 00:39:56.541 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 450775 00:39:56.541 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 450775 00:39:56.541 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:56.542 
13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:56.542 13:50:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:59.077 13:50:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:59.077 00:39:59.077 real 0m18.908s 00:39:59.077 user 0m36.744s 00:39:59.077 sys 0m6.867s 00:39:59.077 13:50:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:59.077 13:50:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.077 ************************************ 00:39:59.077 END TEST nvmf_interrupt 00:39:59.077 ************************************ 00:39:59.077 00:39:59.077 real 32m45.149s 00:39:59.077 user 86m50.919s 00:39:59.077 sys 7m57.522s 00:39:59.077 13:50:50 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:59.077 13:50:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:59.077 ************************************ 00:39:59.077 END TEST nvmf_tcp 00:39:59.077 ************************************ 00:39:59.077 13:50:50 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:39:59.077 13:50:50 -- spdk/autotest.sh@282 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:59.077 13:50:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:59.077 13:50:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:59.077 13:50:50 -- common/autotest_common.sh@10 -- # set +x 00:39:59.077 ************************************ 00:39:59.077 START TEST spdkcli_nvmf_tcp 00:39:59.077 ************************************ 00:39:59.077 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:59.077 * Looking for test storage... 00:39:59.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:59.077 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:59.078 13:50:50 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.078 --rc genhtml_branch_coverage=1 00:39:59.078 --rc genhtml_function_coverage=1 00:39:59.078 --rc genhtml_legend=1 00:39:59.078 --rc geninfo_all_blocks=1 00:39:59.078 --rc 
geninfo_unexecuted_blocks=1 00:39:59.078 00:39:59.078 ' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.078 --rc genhtml_branch_coverage=1 00:39:59.078 --rc genhtml_function_coverage=1 00:39:59.078 --rc genhtml_legend=1 00:39:59.078 --rc geninfo_all_blocks=1 00:39:59.078 --rc geninfo_unexecuted_blocks=1 00:39:59.078 00:39:59.078 ' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.078 --rc genhtml_branch_coverage=1 00:39:59.078 --rc genhtml_function_coverage=1 00:39:59.078 --rc genhtml_legend=1 00:39:59.078 --rc geninfo_all_blocks=1 00:39:59.078 --rc geninfo_unexecuted_blocks=1 00:39:59.078 00:39:59.078 ' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:59.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:59.078 --rc genhtml_branch_coverage=1 00:39:59.078 --rc genhtml_function_coverage=1 00:39:59.078 --rc genhtml_legend=1 00:39:59.078 --rc geninfo_all_blocks=1 00:39:59.078 --rc geninfo_unexecuted_blocks=1 00:39:59.078 00:39:59.078 ' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:59.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=452827 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 452827 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 
452827 ']' 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:59.078 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:59.078 [2024-10-14 13:50:50.683771] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:39:59.078 [2024-10-14 13:50:50.683854] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452827 ] 00:39:59.078 [2024-10-14 13:50:50.746212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:59.078 [2024-10-14 13:50:50.792912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.078 [2024-10-14 13:50:50.792916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:59.336 13:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:59.336 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:59.336 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:59.336 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:59.337 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:59.337 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:59.337 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:59.337 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:59.337 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:59.337 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:59.337 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:59.337 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:59.337 ' 00:40:01.865 [2024-10-14 13:50:53.682474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:03.238 [2024-10-14 13:50:54.954926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:40:05.764 [2024-10-14 13:50:57.302419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:07.661 [2024-10-14 13:50:59.324572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:09.035 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:09.035 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:09.035 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:09.035 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:09.035 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:09.035 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:09.035 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:09.035 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:09.035 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:09.035 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:09.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:09.035 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:09.292 13:51:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:40:09.292 13:51:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:09.292 13:51:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.292 13:51:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:09.292 13:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:09.292 13:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.292 13:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:09.292 13:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:09.857 13:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:09.857 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:40:09.857 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:09.857 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:09.857 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:09.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:09.858 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:09.858 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:09.858 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:09.858 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:09.858 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:09.858 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:09.858 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:09.858 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:09.858 ' 00:40:15.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:15.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:15.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:15.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:15.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:15.119 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:15.119 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:15.119 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:15.119 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:15.119 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:15.119 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:15.119 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:15.119 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:15.119 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 452827 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 452827 ']' 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 452827 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 452827 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 452827' 00:40:15.119 killing process with pid 452827 00:40:15.119 13:51:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 452827 00:40:15.119 13:51:06 
spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 452827 00:40:15.377 13:51:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 452827 ']' 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 452827 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 452827 ']' 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 452827 00:40:15.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (452827) - No such process 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 452827 is not found' 00:40:15.378 Process with pid 452827 is not found 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:15.378 00:40:15.378 real 0m16.652s 00:40:15.378 user 0m35.487s 00:40:15.378 sys 0m0.852s 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:15.378 13:51:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.378 ************************************ 00:40:15.378 END TEST spdkcli_nvmf_tcp 00:40:15.378 ************************************ 00:40:15.378 13:51:07 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:15.378 13:51:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 
']' 00:40:15.378 13:51:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:15.378 13:51:07 -- common/autotest_common.sh@10 -- # set +x 00:40:15.378 ************************************ 00:40:15.378 START TEST nvmf_identify_passthru 00:40:15.378 ************************************ 00:40:15.378 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:15.636 * Looking for test storage... 00:40:15.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:15.636 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:15.636 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:40:15.636 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:15.636 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.636 13:51:07 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:15.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.637 --rc genhtml_branch_coverage=1 00:40:15.637 --rc genhtml_function_coverage=1 00:40:15.637 --rc genhtml_legend=1 00:40:15.637 
--rc geninfo_all_blocks=1 00:40:15.637 --rc geninfo_unexecuted_blocks=1 00:40:15.637 00:40:15.637 ' 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:15.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.637 --rc genhtml_branch_coverage=1 00:40:15.637 --rc genhtml_function_coverage=1 00:40:15.637 --rc genhtml_legend=1 00:40:15.637 --rc geninfo_all_blocks=1 00:40:15.637 --rc geninfo_unexecuted_blocks=1 00:40:15.637 00:40:15.637 ' 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:15.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.637 --rc genhtml_branch_coverage=1 00:40:15.637 --rc genhtml_function_coverage=1 00:40:15.637 --rc genhtml_legend=1 00:40:15.637 --rc geninfo_all_blocks=1 00:40:15.637 --rc geninfo_unexecuted_blocks=1 00:40:15.637 00:40:15.637 ' 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:15.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.637 --rc genhtml_branch_coverage=1 00:40:15.637 --rc genhtml_function_coverage=1 00:40:15.637 --rc genhtml_legend=1 00:40:15.637 --rc geninfo_all_blocks=1 00:40:15.637 --rc geninfo_unexecuted_blocks=1 00:40:15.637 00:40:15.637 ' 00:40:15.637 13:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:15.637 13:51:07 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:15.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.637 13:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.637 13:51:07 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:15.637 13:51:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.637 13:51:07 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:15.637 13:51:07 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:15.637 13:51:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:17.539 
13:51:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:17.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:17.539 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:17.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:17.539 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:17.540 13:51:09 
nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:17.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:17.540 
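The device scan above buckets PCI NICs by vendor:device id before deciding which ports can back the TCP transport (0x8086 0x159b resolving to the Intel e810 family on this node). A stand-alone sketch of that classification, with the ids copied from the trace and a made-up helper name:

```shell
#!/usr/bin/env bash
# Classify a NIC by PCI vendor:device id, mirroring the e810/x722/mlx
# buckets built in nvmf/common.sh above. Helper name is illustrative.
classify_nic() {
  case "$1:$2" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;     # Intel X722
    0x15b3:*)                    echo mlx ;;      # Mellanox/NVIDIA
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # the two ports found in the log
```

The trace then resolves each matched PCI address to its net device via `/sys/bus/pci/devices/$pci/net/*`, which is how `0000:0a:00.0` becomes `cvl_0_0`.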
13:51:09 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:17.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:17.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:40:17.540 00:40:17.540 --- 10.0.0.2 ping statistics --- 00:40:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.540 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:17.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:17.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:40:17.540 00:40:17.540 --- 10.0.0.1 ping statistics --- 00:40:17.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.540 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:17.540 13:51:09 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:17.540 13:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:17.540 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:17.540 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:17.798 13:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:17.798 
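The two pings above gate the rest of the test on namespace reachability in both directions (initiator to target and back through `ip netns exec`). If one wanted to pull the average rtt out of the iputils summary line the trace prints, a small sketch (field layout assumed from the output shown in the log):

```shell
#!/usr/bin/env bash
# Extract the avg value from ping's "rtt min/avg/max/mdev = ..." line.
# Splitting on '/' puts the average in field 5 for the iputils format
# captured above.
avg_rtt() {
  awk -F'/' '/^rtt/ {print $5}'
}

printf 'rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms\n' | avg_rtt
```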
13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:40:17.798 13:51:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:40:17.798 13:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:17.798 13:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:17.798 13:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:17.798 13:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:17.798 13:51:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:21.986 13:51:13 nvmf_identify_passthru -- 
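`get_first_nvme_bdf` above feeds the JSON emitted by scripts/gen_nvme.sh through `jq -r '.config[].params.traddr'` and takes the first address. The same extraction against a stand-in config document (the real JSON comes from gen_nvme.sh; this inline sample only imitates its shape):

```shell
#!/usr/bin/env bash
# Pull the first NVMe PCI address (bdf) out of a gen_nvme.sh-style
# config. The sample document below is a stand-in, not real output.
config='{"config":[{"params":{"traddr":"0000:88:00.0"}},{"params":{"traddr":"0000:89:00.0"}}]}'
first_bdf=$(jq -r '.config[].params.traddr' <<< "$config" | head -n1)
echo "$first_bdf"
```

The resulting bdf (`0000:88:00.0` in this run) is what the trace then hands to `spdk_nvme_identify -r 'trtype:PCIe traddr:...'` to read the controller's serial and model numbers over PCIe.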
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:40:21.986 13:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:21.986 13:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:21.986 13:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:26.171 13:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:26.171 13:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.171 13:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.171 13:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=458071 00:40:26.171 13:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:26.171 13:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:26.171 13:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 458071 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 458071 ']' 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:26.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:26.171 13:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.171 [2024-10-14 13:51:17.946159] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:40:26.172 [2024-10-14 13:51:17.946254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:26.172 [2024-10-14 13:51:18.013676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:26.429 [2024-10-14 13:51:18.060752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:26.429 [2024-10-14 13:51:18.060807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.429 [2024-10-14 13:51:18.060834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.429 [2024-10-14 13:51:18.060846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.429 [2024-10-14 13:51:18.060855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:26.429 [2024-10-14 13:51:18.062327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.429 [2024-10-14 13:51:18.062351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:26.429 [2024-10-14 13:51:18.062413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:26.429 [2024-10-14 13:51:18.062416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:26.429 13:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.429 INFO: Log level set to 20 00:40:26.429 INFO: Requests: 00:40:26.429 { 00:40:26.429 "jsonrpc": "2.0", 00:40:26.429 "method": "nvmf_set_config", 00:40:26.429 "id": 1, 00:40:26.429 "params": { 00:40:26.429 "admin_cmd_passthru": { 00:40:26.429 "identify_ctrlr": true 00:40:26.429 } 00:40:26.429 } 00:40:26.429 } 00:40:26.429 00:40:26.429 INFO: response: 00:40:26.429 { 00:40:26.429 "jsonrpc": "2.0", 00:40:26.429 "id": 1, 00:40:26.429 "result": true 00:40:26.429 } 00:40:26.429 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.429 13:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.429 INFO: Setting log level to 20 00:40:26.429 INFO: Setting log level to 20 00:40:26.429 INFO: Log level set to 20 00:40:26.429 INFO: Log level set to 20 00:40:26.429 
INFO: Requests: 00:40:26.429 { 00:40:26.429 "jsonrpc": "2.0", 00:40:26.429 "method": "framework_start_init", 00:40:26.429 "id": 1 00:40:26.429 } 00:40:26.429 00:40:26.429 INFO: Requests: 00:40:26.429 { 00:40:26.429 "jsonrpc": "2.0", 00:40:26.429 "method": "framework_start_init", 00:40:26.429 "id": 1 00:40:26.429 } 00:40:26.429 00:40:26.429 [2024-10-14 13:51:18.270267] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:26.429 INFO: response: 00:40:26.429 { 00:40:26.429 "jsonrpc": "2.0", 00:40:26.429 "id": 1, 00:40:26.429 "result": true 00:40:26.429 } 00:40:26.429 00:40:26.429 INFO: response: 00:40:26.429 { 00:40:26.429 "jsonrpc": "2.0", 00:40:26.429 "id": 1, 00:40:26.429 "result": true 00:40:26.429 } 00:40:26.429 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.429 13:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.429 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.429 INFO: Setting log level to 40 00:40:26.429 INFO: Setting log level to 40 00:40:26.429 INFO: Setting log level to 40 00:40:26.429 [2024-10-14 13:51:18.280474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:26.687 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.687 13:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:26.687 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:26.687 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:26.687 13:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:26.687 13:51:18 
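`rpc_cmd` above wraps SPDK's scripts/rpc.py, and the log captures the raw JSON-RPC requests and responses it exchanges with the target. A sketch assembling the `nvmf_set_config` payload from the trace by hand (delivery over the default `/var/tmp/spdk.sock` unix socket is shown as a comment only, not executed here):

```shell
#!/usr/bin/env bash
# Build the nvmf_set_config JSON-RPC request logged above, which enables
# the custom identify-ctrlr passthru handler before framework_start_init.
passthru_req() {
  printf '%s' '{"jsonrpc":"2.0","method":"nvmf_set_config","id":1,"params":{"admin_cmd_passthru":{"identify_ctrlr":true}}}'
}

# Delivery would be something like:
#   passthru_req | nc -U /var/tmp/spdk.sock
# Here we just validate the payload and print its method field.
passthru_req | python3 -c 'import json,sys; print(json.load(sys.stdin)["method"])'
```

Note the ordering the trace enforces: the config change is sent while the target waits in `--wait-for-rpc`, and only then does `framework_start_init` run, which is when the "Custom identify ctrlr handler enabled" notice appears.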
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.687 13:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.964 Nvme0n1 00:40:29.964 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.964 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:29.964 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.964 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.964 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.964 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.965 [2024-10-14 13:51:21.182773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.965 13:51:21 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.965 [ 00:40:29.965 { 00:40:29.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:29.965 "subtype": "Discovery", 00:40:29.965 "listen_addresses": [], 00:40:29.965 "allow_any_host": true, 00:40:29.965 "hosts": [] 00:40:29.965 }, 00:40:29.965 { 00:40:29.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:29.965 "subtype": "NVMe", 00:40:29.965 "listen_addresses": [ 00:40:29.965 { 00:40:29.965 "trtype": "TCP", 00:40:29.965 "adrfam": "IPv4", 00:40:29.965 "traddr": "10.0.0.2", 00:40:29.965 "trsvcid": "4420" 00:40:29.965 } 00:40:29.965 ], 00:40:29.965 "allow_any_host": true, 00:40:29.965 "hosts": [], 00:40:29.965 "serial_number": "SPDK00000000000001", 00:40:29.965 "model_number": "SPDK bdev Controller", 00:40:29.965 "max_namespaces": 1, 00:40:29.965 "min_cntlid": 1, 00:40:29.965 "max_cntlid": 65519, 00:40:29.965 "namespaces": [ 00:40:29.965 { 00:40:29.965 "nsid": 1, 00:40:29.965 "bdev_name": "Nvme0n1", 00:40:29.965 "name": "Nvme0n1", 00:40:29.965 "nguid": "E422845EB4734BFBA8DCA8A890FC38CB", 00:40:29.965 "uuid": "e422845e-b473-4bfb-a8dc-a8a890fc38cb" 00:40:29.965 } 00:40:29.965 ] 00:40:29.965 } 00:40:29.965 ] 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:29.965 13:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:29.965 rmmod nvme_tcp 00:40:29.965 rmmod nvme_fabrics 00:40:29.965 rmmod nvme_keyring 00:40:29.965 13:51:21 
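The two `spdk_nvme_identify | grep | awk` pipelines above recover the serial and model strings that the passthru check compares (`PHLJ916004901P0FGN` and `INTEL`). A self-contained sketch of the same extraction against stand-in identify output (the text below is illustrative, not captured from this run); note that `awk '{print $3}'` keeps only the third whitespace-separated field, which is why the model comes back as just the vendor token:

```shell
# Stand-in excerpt of spdk_nvme_identify output (illustrative only,
# not a capture from this run).
identify_out=$(cat <<'EOF'
Serial Number:                         PHLJ916004901P0FGN
Model Number:                          INTEL SSDPE2KX010T8
EOF
)
# Field 1 is "Serial"/"Model", field 2 is "Number:", field 3 is the value
# (or only its first token, for multi-word model strings).
serial=$(echo "$identify_out" | grep 'Serial Number:' | awk '{print $3}')
model=$(echo "$identify_out" | grep 'Model Number:' | awk '{print $3}')
echo "serial=$serial model=$model"
```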
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 458071 ']' 00:40:29.965 13:51:21 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 458071 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 458071 ']' 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 458071 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 458071 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 458071' 00:40:29.965 killing process with pid 458071 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 458071 00:40:29.965 13:51:21 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 458071 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:31.863 13:51:23 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.863 13:51:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:31.863 13:51:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.765 13:51:25 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:33.765 00:40:33.765 real 0m18.112s 00:40:33.765 user 0m27.234s 00:40:33.765 sys 0m2.380s 00:40:33.765 13:51:25 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:33.765 13:51:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:33.765 ************************************ 00:40:33.765 END TEST nvmf_identify_passthru 00:40:33.765 ************************************ 00:40:33.765 13:51:25 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:33.765 13:51:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:33.765 13:51:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:33.765 13:51:25 -- common/autotest_common.sh@10 -- # set +x 00:40:33.765 ************************************ 00:40:33.765 START TEST nvmf_dif 00:40:33.765 ************************************ 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:33.765 * Looking for test storage... 
00:40:33.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.765 --rc genhtml_branch_coverage=1 00:40:33.765 --rc genhtml_function_coverage=1 00:40:33.765 --rc genhtml_legend=1 00:40:33.765 --rc geninfo_all_blocks=1 00:40:33.765 --rc geninfo_unexecuted_blocks=1 00:40:33.765 00:40:33.765 ' 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.765 --rc genhtml_branch_coverage=1 00:40:33.765 --rc genhtml_function_coverage=1 00:40:33.765 --rc genhtml_legend=1 00:40:33.765 --rc geninfo_all_blocks=1 00:40:33.765 --rc geninfo_unexecuted_blocks=1 00:40:33.765 00:40:33.765 ' 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
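Before the dif tests start, the harness gates on the installed lcov version via `lt 1.15 2` from scripts/common.sh (`cmp_versions`), which splits each version on `.` and compares component-wise. A simplified re-implementation of that comparison (sketch only; the real helper also understands `-`-separated parts and more operators):

```shell
# Simplified version of the "is ver1 < ver2" check the trace exercises.
# Splits on '.' and compares numerically, left to right; missing
# components are treated as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        if ((x < y)); then return 0; fi
        if ((x > y)); then return 1; fi
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```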
'LCOV=lcov 00:40:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.765 --rc genhtml_branch_coverage=1 00:40:33.765 --rc genhtml_function_coverage=1 00:40:33.765 --rc genhtml_legend=1 00:40:33.765 --rc geninfo_all_blocks=1 00:40:33.765 --rc geninfo_unexecuted_blocks=1 00:40:33.765 00:40:33.765 ' 00:40:33.765 13:51:25 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:33.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.765 --rc genhtml_branch_coverage=1 00:40:33.765 --rc genhtml_function_coverage=1 00:40:33.765 --rc genhtml_legend=1 00:40:33.765 --rc geninfo_all_blocks=1 00:40:33.765 --rc geninfo_unexecuted_blocks=1 00:40:33.765 00:40:33.765 ' 00:40:33.765 13:51:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:33.765 13:51:25 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:33.765 13:51:25 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:33.765 13:51:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:33.766 13:51:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:33.766 13:51:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.766 13:51:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.766 13:51:25 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.766 13:51:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:33.766 13:51:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:33.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:33.766 13:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:33.766 13:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:40:33.766 13:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:33.766 13:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:33.766 13:51:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.766 13:51:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:33.766 13:51:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:33.766 13:51:25 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:40:33.766 13:51:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:35.669 13:51:27 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:35.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:35.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:35.669 13:51:27 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:35.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:35.669 13:51:27 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:35.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:35.670 
13:51:27 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:35.670 13:51:27 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:35.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
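The `ping -c 1` checks here are how the harness verifies the veth/namespace plumbing (`cvl_0_0` inside `cvl_0_0_ns_spdk`, `cvl_0_1` in the root namespace) before the NVMe-oF traffic starts. A small sketch of pulling the packet-loss figure out of a ping summary (the summary text below is a stand-in; no live ping is issued):

```shell
# Stand-in ping summary (mirrors the shape of the output in this run).
ping_out=$(cat <<'EOF'
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms
EOF
)
# Split the summary line on ", "; the third field is "N% packet loss".
loss=$(echo "$ping_out" | awk -F', ' '/packet loss/ {print $3}' | cut -d% -f1)
echo "packet loss: ${loss}%"
[ "$loss" -eq 0 ] && echo "link OK"
```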
00:40:35.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:40:35.928 00:40:35.928 --- 10.0.0.2 ping statistics --- 00:40:35.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.928 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:35.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:35.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:40:35.928 00:40:35.928 --- 10.0.0.1 ping statistics --- 00:40:35.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.928 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:40:35.928 13:51:27 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:37.303 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:37.303 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:37.303 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:37.303 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:37.303 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:37.303 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:37.303 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:37.303 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:37.303 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:37.303 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:37.303 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:37.303 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:37.303 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:40:37.303 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:37.303 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:37.303 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:37.303 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:37.303 13:51:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:37.303 13:51:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=461340 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:37.303 13:51:29 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 461340 00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 461340 ']' 00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:37.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable
00:40:37.303 13:51:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:40:37.303 [2024-10-14 13:51:29.118810] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization...
00:40:37.303 [2024-10-14 13:51:29.118885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:40:37.564 [2024-10-14 13:51:29.188479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:37.564 [2024-10-14 13:51:29.237611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:40:37.564 [2024-10-14 13:51:29.237694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:40:37.564 [2024-10-14 13:51:29.237723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:40:37.564 [2024-10-14 13:51:29.237735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:40:37.564 [2024-10-14 13:51:29.237745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:40:37.564 [2024-10-14 13:51:29.238393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:37.564 13:51:29 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@864 -- # return 0
00:40:37.565 13:51:29 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:40:37.565 13:51:29 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:40:37.565 13:51:29 nvmf_dif -- target/dif.sh@139 -- # create_transport
00:40:37.565 13:51:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:40:37.565 [2024-10-14 13:51:29.392646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:37.565 13:51:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable
00:40:37.565 13:51:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:40:37.823 ************************************
00:40:37.823 START TEST fio_dif_1_default
00:40:37.823 ************************************
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@"
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:40:37.823 bdev_null0
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:40:37.823 [2024-10-14 13:51:29.452981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=()
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:40:37.823 {
00:40:37.823 "params": {
00:40:37.823 "name": "Nvme$subsystem",
00:40:37.823 "trtype": "$TEST_TRANSPORT",
00:40:37.823 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:37.823 "adrfam": "ipv4",
00:40:37.823 "trsvcid": "$NVMF_PORT",
00:40:37.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:37.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:37.823 "hdgst": ${hdgst:-false},
00:40:37.823 "ddgst": ${ddgst:-false}
00:40:37.823 },
00:40:37.823 "method": "bdev_nvme_attach_controller"
00:40:37.823 }
00:40:37.823 EOF
00:40:37.823 )")
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib=
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq .
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=,
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:40:37.823 "params": {
00:40:37.823 "name": "Nvme0",
00:40:37.823 "trtype": "tcp",
00:40:37.823 "traddr": "10.0.0.2",
00:40:37.823 "adrfam": "ipv4",
00:40:37.823 "trsvcid": "4420",
00:40:37.823 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:37.823 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:37.823 "hdgst": false,
00:40:37.823 "ddgst": false
00:40:37.823 },
00:40:37.823 "method": "bdev_nvme_attach_controller"
00:40:37.823 }'
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:40:37.823 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:40:37.824 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:40:37.824 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:40:37.824 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:40:37.824 13:51:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:40:38.082 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:40:38.082 fio-3.35
00:40:38.082 Starting 1 thread
00:40:50.283 
00:40:50.283 filename0: (groupid=0, jobs=1): err= 0: pid=461564: Mon Oct 14 13:51:40 2024
00:40:50.283 read: IOPS=100, BW=401KiB/s (411kB/s)(4016KiB/10017msec)
00:40:50.283 slat (nsec): min=4358, max=51896, avg=8823.29, stdev=3079.77
00:40:50.283 clat (usec): min=571, max=46747, avg=39879.87, stdev=6659.35
00:40:50.283 lat (usec): min=578, max=46765, avg=39888.69, stdev=6658.81
00:40:50.283 clat percentiles (usec):
00:40:50.283 | 1.00th=[ 603], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:40:50.283 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:40:50.283 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:40:50.283 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924],
00:40:50.283 | 99.99th=[46924]
00:40:50.283 bw ( KiB/s): min= 384, max= 448, per=99.77%, avg=400.00, stdev=22.02, samples=20
00:40:50.283 iops : min= 96, max= 112, avg=100.00, stdev= 5.51, samples=20
00:40:50.283 lat (usec) : 750=2.39%, 1000=0.40%
00:40:50.283 lat (msec) : 50=97.21%
00:40:50.283 cpu : usr=91.45%, sys=8.27%, ctx=13, majf=0, minf=167
00:40:50.283 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:50.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:50.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:50.283 issued rwts: total=1004,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:50.283 latency : target=0, window=0, percentile=100.00%, depth=4
00:40:50.283 
00:40:50.283 Run status group 0 (all jobs):
00:40:50.283 READ: bw=401KiB/s (411kB/s), 401KiB/s-401KiB/s (411kB/s-411kB/s), io=4016KiB (4112kB), run=10017-10017msec
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.283 
00:40:50.283 real 0m11.158s
00:40:50.283 user 0m10.262s
00:40:50.283 sys 0m1.149s
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:50.283 13:51:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 ************************************
00:40:50.284 END TEST fio_dif_1_default
00:40:50.284 ************************************
00:40:50.284 13:51:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:40:50.284 13:51:40 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:40:50.284 13:51:40 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 ************************************
00:40:50.284 START TEST fio_dif_1_multi_subsystems
00:40:50.284 ************************************
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 bdev_null0
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 [2024-10-14 13:51:40.662664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 bdev_null1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=()
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:40:50.284 {
00:40:50.284 "params": {
00:40:50.284 "name": "Nvme$subsystem",
00:40:50.284 "trtype": "$TEST_TRANSPORT",
00:40:50.284 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:50.284 "adrfam": "ipv4",
00:40:50.284 "trsvcid": "$NVMF_PORT",
00:40:50.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:50.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:50.284 "hdgst": ${hdgst:-false},
00:40:50.284 "ddgst": ${ddgst:-false}
00:40:50.284 },
00:40:50.284 "method": "bdev_nvme_attach_controller"
00:40:50.284 }
00:40:50.284 EOF
00:40:50.284 )")
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib=
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 ))
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:40:50.284 {
00:40:50.284 "params": {
00:40:50.284 "name": "Nvme$subsystem",
00:40:50.284 "trtype": "$TEST_TRANSPORT",
00:40:50.284 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:50.284 "adrfam": "ipv4",
00:40:50.284 "trsvcid": "$NVMF_PORT",
00:40:50.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:50.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:50.284 "hdgst": ${hdgst:-false},
00:40:50.284 "ddgst": ${ddgst:-false}
00:40:50.284 },
00:40:50.284 "method": "bdev_nvme_attach_controller"
00:40:50.284 }
00:40:50.284 EOF
00:40:50.284 )")
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ ))
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq .
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=,
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:40:50.284 "params": {
00:40:50.284 "name": "Nvme0",
00:40:50.284 "trtype": "tcp",
00:40:50.284 "traddr": "10.0.0.2",
00:40:50.284 "adrfam": "ipv4",
00:40:50.284 "trsvcid": "4420",
00:40:50.284 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:50.284 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:50.284 "hdgst": false,
00:40:50.284 "ddgst": false
00:40:50.284 },
00:40:50.284 "method": "bdev_nvme_attach_controller"
00:40:50.284 },{
00:40:50.284 "params": {
00:40:50.284 "name": "Nvme1",
00:40:50.284 "trtype": "tcp",
00:40:50.284 "traddr": "10.0.0.2",
00:40:50.284 "adrfam": "ipv4",
00:40:50.284 "trsvcid": "4420",
00:40:50.284 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:40:50.284 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:40:50.284 "hdgst": false,
00:40:50.284 "ddgst": false
00:40:50.284 },
00:40:50.284 "method": "bdev_nvme_attach_controller"
00:40:50.284 }'
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:40:50.284 13:51:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:40:50.285 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:40:50.285 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:40:50.285 fio-3.35
00:40:50.285 Starting 2 threads
00:41:00.247 
00:41:00.247 filename0: (groupid=0, jobs=1): err= 0: pid=462965: Mon Oct 14 13:51:51 2024
00:41:00.247 read: IOPS=196, BW=787KiB/s (806kB/s)(7888KiB/10027msec)
00:41:00.247 slat (nsec): min=6853, max=68554, avg=9795.42, stdev=2934.38
00:41:00.247 clat (usec): min=519, max=46385, avg=20307.29, stdev=20347.54
00:41:00.247 lat (usec): min=527, max=46412, avg=20317.08, stdev=20347.33
00:41:00.247 clat percentiles (usec):
00:41:00.247 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 594],
00:41:00.247 | 30.00th=[ 627], 40.00th=[ 660], 50.00th=[ 971], 60.00th=[41157],
00:41:00.247 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681],
00:41:00.247 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400],
00:41:00.247 | 99.99th=[46400]
00:41:00.247 bw ( KiB/s): min= 704, max= 896, per=49.18%, avg=787.20, stdev=55.33, samples=20
00:41:00.247 iops : min= 176, max= 224, avg=196.80, stdev=13.83, samples=20
00:41:00.247 lat (usec) : 750=44.02%, 1000=6.85%
00:41:00.247 lat (msec) : 2=0.86%, 50=48.28%
00:41:00.247 cpu : usr=94.81%, sys=4.90%, ctx=15, majf=0, minf=208
00:41:00.247 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:00.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:00.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:00.247 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:00.247 latency : target=0, window=0, percentile=100.00%, depth=4
00:41:00.247 filename1: (groupid=0, jobs=1): err= 0: pid=462966: Mon Oct 14 13:51:51 2024
00:41:00.247 read: IOPS=203, BW=815KiB/s (834kB/s)(8176KiB/10038msec)
00:41:00.247 slat (nsec): min=7190, max=27574, avg=9827.11, stdev=2623.88
00:41:00.247 clat (usec): min=544, max=46388, avg=19612.35, stdev=20285.54
00:41:00.247 lat (usec): min=552, max=46416, avg=19622.18, stdev=20285.41
00:41:00.247 clat percentiles (usec):
00:41:00.247 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 627],
00:41:00.247 | 30.00th=[ 660], 40.00th=[ 709], 50.00th=[ 914], 60.00th=[41157],
00:41:00.247 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:41:00.247 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400],
00:41:00.247 | 99.99th=[46400]
00:41:00.247 bw ( KiB/s): min= 704, max= 896, per=50.99%, avg=816.00, stdev=54.44, samples=20
00:41:00.247 iops : min= 176, max= 224, avg=204.00, stdev=13.61, samples=20
00:41:00.247 lat (usec) : 750=43.84%, 1000=8.81%
00:41:00.247 lat (msec) : 2=0.78%, 50=46.58%
00:41:00.247 cpu : usr=94.93%, sys=4.78%, ctx=10, majf=0, minf=113
00:41:00.247 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:00.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:00.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:00.247 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:00.247 latency : target=0, window=0, percentile=100.00%, depth=4
00:41:00.247 
00:41:00.247 Run status group 0 (all jobs):
00:41:00.247 READ: bw=1600KiB/s (1639kB/s), 787KiB/s-815KiB/s (806kB/s-834kB/s), io=15.7MiB (16.4MB), run=10027-10038msec
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:00.247 13:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:00.247 13:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.247 13:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:00.247 13:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.247 13:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:00.247 13:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.247 00:41:00.247 real 0m11.379s 00:41:00.247 user 0m20.384s 00:41:00.247 sys 0m1.312s 00:41:00.247 13:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:00.247 13:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:00.247 ************************************ 00:41:00.247 END TEST fio_dif_1_multi_subsystems 00:41:00.247 ************************************ 00:41:00.247 13:51:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:00.247 13:51:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:00.247 13:51:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:00.247 13:51:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:00.247 ************************************ 00:41:00.247 START TEST fio_dif_rand_params 00:41:00.247 ************************************ 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.247 bdev_null0 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.247 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:00.247 [2024-10-14 13:51:52.085385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:00.248 { 00:41:00.248 "params": { 00:41:00.248 "name": "Nvme$subsystem", 00:41:00.248 "trtype": "$TEST_TRANSPORT", 00:41:00.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:00.248 "adrfam": "ipv4", 00:41:00.248 "trsvcid": "$NVMF_PORT", 00:41:00.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:00.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:00.248 "hdgst": ${hdgst:-false}, 00:41:00.248 "ddgst": 
${ddgst:-false} 00:41:00.248 }, 00:41:00.248 "method": "bdev_nvme_attach_controller" 00:41:00.248 } 00:41:00.248 EOF 00:41:00.248 )") 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- 
# grep libasan 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:00.248 13:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:00.248 "params": { 00:41:00.248 "name": "Nvme0", 00:41:00.248 "trtype": "tcp", 00:41:00.248 "traddr": "10.0.0.2", 00:41:00.248 "adrfam": "ipv4", 00:41:00.248 "trsvcid": "4420", 00:41:00.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:00.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:00.248 "hdgst": false, 00:41:00.248 "ddgst": false 00:41:00.248 }, 00:41:00.248 "method": "bdev_nvme_attach_controller" 00:41:00.248 }' 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:00.509 13:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:00.509 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:00.509 ... 00:41:00.509 fio-3.35 00:41:00.509 Starting 3 threads 00:41:07.188 00:41:07.188 filename0: (groupid=0, jobs=1): err= 0: pid=464361: Mon Oct 14 13:51:58 2024 00:41:07.188 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(146MiB/5004msec) 00:41:07.188 slat (nsec): min=4785, max=40734, avg=14287.12, stdev=2956.97 00:41:07.188 clat (usec): min=6746, max=52620, avg=12842.39, stdev=3921.79 00:41:07.188 lat (usec): min=6761, max=52633, avg=12856.68, stdev=3921.87 00:41:07.188 clat percentiles (usec): 00:41:07.188 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10814], 00:41:07.188 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12518], 60.00th=[13042], 00:41:07.188 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15139], 95.00th=[16057], 00:41:07.188 | 99.00th=[17695], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:41:07.188 | 99.99th=[52691] 00:41:07.188 bw ( KiB/s): min=25600, max=33024, per=33.30%, avg=29824.00, stdev=2209.62, samples=10 00:41:07.188 iops : min= 200, max= 258, avg=233.00, stdev=17.26, samples=10 00:41:07.188 lat (msec) : 10=6.43%, 20=92.80%, 100=0.77% 00:41:07.188 cpu : usr=93.56%, sys=5.90%, ctx=10, majf=0, minf=96 00:41:07.188 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:07.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.188 issued rwts: total=1167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:07.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:07.188 filename0: (groupid=0, jobs=1): err= 0: pid=464362: Mon Oct 14 13:51:58 2024 00:41:07.188 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(154MiB/5044msec) 00:41:07.188 slat (nsec): min=4968, max=40862, avg=14261.24, 
stdev=2709.28 00:41:07.188 clat (usec): min=6912, max=52336, avg=12271.23, stdev=2321.24 00:41:07.188 lat (usec): min=6925, max=52349, avg=12285.49, stdev=2321.21 00:41:07.188 clat percentiles (usec): 00:41:07.188 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10814], 00:41:07.188 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:41:07.188 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14615], 95.00th=[15139], 00:41:07.188 | 99.00th=[16319], 99.50th=[16712], 99.90th=[46924], 99.95th=[52167], 00:41:07.188 | 99.99th=[52167] 00:41:07.188 bw ( KiB/s): min=29440, max=33792, per=35.02%, avg=31366.30, stdev=1562.54, samples=10 00:41:07.188 iops : min= 230, max= 264, avg=245.00, stdev=12.19, samples=10 00:41:07.188 lat (msec) : 10=8.63%, 20=91.21%, 50=0.08%, 100=0.08% 00:41:07.188 cpu : usr=93.38%, sys=6.13%, ctx=12, majf=0, minf=108 00:41:07.188 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:07.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.188 issued rwts: total=1228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:07.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:07.188 filename0: (groupid=0, jobs=1): err= 0: pid=464363: Mon Oct 14 13:51:58 2024 00:41:07.188 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(142MiB/5044msec) 00:41:07.188 slat (nsec): min=4473, max=40996, avg=15489.48, stdev=3863.45 00:41:07.188 clat (usec): min=7354, max=53173, avg=13287.06, stdev=3768.21 00:41:07.188 lat (usec): min=7377, max=53189, avg=13302.55, stdev=3768.13 00:41:07.188 clat percentiles (usec): 00:41:07.188 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:41:07.188 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:41:07.188 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15533], 95.00th=[16450], 00:41:07.188 | 99.00th=[17957], 99.50th=[51643], 
99.90th=[52691], 99.95th=[53216], 00:41:07.188 | 99.99th=[53216] 00:41:07.188 bw ( KiB/s): min=26368, max=30976, per=32.36%, avg=28979.20, stdev=1467.14, samples=10 00:41:07.188 iops : min= 206, max= 242, avg=226.40, stdev=11.46, samples=10 00:41:07.188 lat (msec) : 10=4.67%, 20=94.36%, 50=0.44%, 100=0.53% 00:41:07.188 cpu : usr=93.65%, sys=5.81%, ctx=15, majf=0, minf=81 00:41:07.188 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:07.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.188 issued rwts: total=1134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:07.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:07.188 00:41:07.188 Run status group 0 (all jobs): 00:41:07.188 READ: bw=87.5MiB/s (91.7MB/s), 28.1MiB/s-30.4MiB/s (29.5MB/s-31.9MB/s), io=441MiB (463MB), run=5004-5044msec 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:07.188 13:51:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:07.188 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 bdev_null0 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 [2024-10-14 13:51:58.384760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 bdev_null1 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:07.189 bdev_null2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@558 -- # local subsystem config 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:07.189 { 00:41:07.189 "params": { 00:41:07.189 "name": "Nvme$subsystem", 00:41:07.189 "trtype": "$TEST_TRANSPORT", 00:41:07.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.189 "adrfam": "ipv4", 00:41:07.189 "trsvcid": "$NVMF_PORT", 00:41:07.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.189 "hdgst": ${hdgst:-false}, 00:41:07.189 "ddgst": ${ddgst:-false} 00:41:07.189 }, 00:41:07.189 "method": "bdev_nvme_attach_controller" 00:41:07.189 } 00:41:07.189 EOF 00:41:07.189 )") 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local 
asan_lib= 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:07.189 { 00:41:07.189 "params": { 00:41:07.189 "name": "Nvme$subsystem", 00:41:07.189 "trtype": "$TEST_TRANSPORT", 00:41:07.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.189 "adrfam": "ipv4", 00:41:07.189 "trsvcid": "$NVMF_PORT", 00:41:07.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.189 "hdgst": ${hdgst:-false}, 00:41:07.189 "ddgst": ${ddgst:-false} 00:41:07.189 }, 00:41:07.189 "method": "bdev_nvme_attach_controller" 00:41:07.189 } 00:41:07.189 EOF 00:41:07.189 )") 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:07.189 13:51:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:07.189 { 00:41:07.189 "params": { 00:41:07.189 "name": "Nvme$subsystem", 00:41:07.189 "trtype": "$TEST_TRANSPORT", 00:41:07.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.189 "adrfam": "ipv4", 00:41:07.189 "trsvcid": "$NVMF_PORT", 00:41:07.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.189 "hdgst": ${hdgst:-false}, 00:41:07.189 "ddgst": ${ddgst:-false} 00:41:07.189 }, 00:41:07.189 "method": "bdev_nvme_attach_controller" 00:41:07.189 } 00:41:07.189 EOF 00:41:07.189 )") 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:41:07.189 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:07.190 "params": { 00:41:07.190 "name": "Nvme0", 00:41:07.190 "trtype": "tcp", 00:41:07.190 "traddr": "10.0.0.2", 00:41:07.190 "adrfam": "ipv4", 00:41:07.190 "trsvcid": "4420", 00:41:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:07.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:07.190 "hdgst": false, 00:41:07.190 "ddgst": false 00:41:07.190 }, 00:41:07.190 "method": "bdev_nvme_attach_controller" 00:41:07.190 },{ 00:41:07.190 "params": { 00:41:07.190 "name": "Nvme1", 00:41:07.190 "trtype": "tcp", 00:41:07.190 "traddr": "10.0.0.2", 00:41:07.190 "adrfam": "ipv4", 00:41:07.190 "trsvcid": "4420", 00:41:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.190 "hdgst": false, 00:41:07.190 "ddgst": false 00:41:07.190 }, 00:41:07.190 "method": "bdev_nvme_attach_controller" 00:41:07.190 },{ 00:41:07.190 "params": { 00:41:07.190 "name": "Nvme2", 00:41:07.190 "trtype": "tcp", 00:41:07.190 "traddr": "10.0.0.2", 00:41:07.190 "adrfam": "ipv4", 00:41:07.190 "trsvcid": "4420", 00:41:07.190 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:07.190 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:07.190 "hdgst": false, 00:41:07.190 "ddgst": false 00:41:07.190 }, 00:41:07.190 "method": "bdev_nvme_attach_controller" 00:41:07.190 }' 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.190 13:51:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:07.190 13:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.190 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:07.190 ... 00:41:07.190 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:07.190 ... 00:41:07.190 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:07.190 ... 
00:41:07.190 fio-3.35 00:41:07.190 Starting 24 threads 00:41:19.390 00:41:19.390 filename0: (groupid=0, jobs=1): err= 0: pid=465230: Mon Oct 14 13:52:09 2024 00:41:19.390 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10010msec) 00:41:19.390 slat (nsec): min=8174, max=83437, avg=31806.18, stdev=10536.37 00:41:19.390 clat (usec): min=20175, max=51598, avg=33506.55, stdev=1374.47 00:41:19.390 lat (usec): min=20218, max=51615, avg=33538.36, stdev=1374.22 00:41:19.390 clat percentiles (usec): 00:41:19.390 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.390 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.390 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.390 | 99.00th=[36439], 99.50th=[36439], 99.90th=[51643], 99.95th=[51643], 00:41:19.390 | 99.99th=[51643] 00:41:19.390 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1886.32, stdev=57.91, samples=19 00:41:19.390 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:41:19.390 lat (msec) : 50=99.66%, 100=0.34% 00:41:19.390 cpu : usr=98.50%, sys=1.07%, ctx=15, majf=0, minf=30 00:41:19.390 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.390 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.390 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.390 filename0: (groupid=0, jobs=1): err= 0: pid=465231: Mon Oct 14 13:52:09 2024 00:41:19.390 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10016msec) 00:41:19.390 slat (nsec): min=5073, max=79591, avg=29486.90, stdev=13218.98 00:41:19.390 clat (usec): min=3594, max=36532, avg=33107.29, stdev=3269.31 00:41:19.390 lat (usec): min=3602, max=36550, avg=33136.77, stdev=3270.09 00:41:19.390 clat percentiles (usec): 00:41:19.390 | 
1.00th=[13042], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.390 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:19.390 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.390 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:41:19.390 | 99.99th=[36439] 00:41:19.390 bw ( KiB/s): min= 1792, max= 2352, per=4.22%, avg=1916.00, stdev=115.13, samples=20 00:41:19.390 iops : min= 448, max= 588, avg=479.00, stdev=28.78, samples=20 00:41:19.390 lat (msec) : 4=0.29%, 10=0.71%, 20=0.73%, 50=98.27% 00:41:19.390 cpu : usr=98.33%, sys=1.28%, ctx=16, majf=0, minf=28 00:41:19.390 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:19.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.390 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.390 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.390 filename0: (groupid=0, jobs=1): err= 0: pid=465232: Mon Oct 14 13:52:09 2024 00:41:19.390 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:41:19.390 slat (nsec): min=8168, max=85184, avg=30972.86, stdev=9996.84 00:41:19.390 clat (usec): min=20185, max=51630, avg=33516.75, stdev=1371.05 00:41:19.390 lat (usec): min=20218, max=51646, avg=33547.72, stdev=1370.67 00:41:19.390 clat percentiles (usec): 00:41:19.390 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.390 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.390 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.390 | 99.00th=[36439], 99.50th=[36439], 99.90th=[51643], 99.95th=[51643], 00:41:19.390 | 99.99th=[51643] 00:41:19.390 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1886.32, stdev=57.91, samples=19 00:41:19.390 iops : min= 448, max= 480, avg=471.58, 
stdev=14.48, samples=19 00:41:19.390 lat (msec) : 50=99.66%, 100=0.34% 00:41:19.390 cpu : usr=98.42%, sys=1.15%, ctx=11, majf=0, minf=23 00:41:19.390 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.391 filename0: (groupid=0, jobs=1): err= 0: pid=465233: Mon Oct 14 13:52:09 2024 00:41:19.391 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:41:19.391 slat (nsec): min=9795, max=59125, avg=29475.54, stdev=8584.51 00:41:19.391 clat (usec): min=15507, max=36734, avg=33450.38, stdev=1189.29 00:41:19.391 lat (usec): min=15544, max=36786, avg=33479.86, stdev=1189.31 00:41:19.391 clat percentiles (usec): 00:41:19.391 | 1.00th=[31589], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.391 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.391 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.391 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:41:19.391 | 99.99th=[36963] 00:41:19.391 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.391 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.391 lat (msec) : 20=0.34%, 50=99.66% 00:41:19.391 cpu : usr=95.94%, sys=2.43%, ctx=206, majf=0, minf=40 00:41:19.391 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.391 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:41:19.391 filename0: (groupid=0, jobs=1): err= 0: pid=465234: Mon Oct 14 13:52:09 2024 00:41:19.391 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10003msec) 00:41:19.391 slat (usec): min=8, max=117, avg=44.64, stdev=21.65 00:41:19.391 clat (usec): min=22254, max=74402, avg=33497.80, stdev=2509.27 00:41:19.391 lat (usec): min=22319, max=74436, avg=33542.44, stdev=2506.54 00:41:19.391 clat percentiles (usec): 00:41:19.391 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:19.391 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:19.391 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.391 | 99.00th=[35914], 99.50th=[36439], 99.90th=[73925], 99.95th=[73925], 00:41:19.391 | 99.99th=[73925] 00:41:19.391 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1879.74, stdev=74.07, samples=19 00:41:19.391 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:19.391 lat (msec) : 50=99.66%, 100=0.34% 00:41:19.391 cpu : usr=98.26%, sys=1.32%, ctx=17, majf=0, minf=33 00:41:19.391 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.391 filename0: (groupid=0, jobs=1): err= 0: pid=465235: Mon Oct 14 13:52:09 2024 00:41:19.391 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:41:19.391 slat (usec): min=4, max=106, avg=33.44, stdev=10.13 00:41:19.391 clat (usec): min=23663, max=47494, avg=33522.84, stdev=1076.62 00:41:19.391 lat (usec): min=23692, max=47506, avg=33556.28, stdev=1075.22 00:41:19.391 clat percentiles (usec): 00:41:19.391 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 
00:41:19.391 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.391 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.391 | 99.00th=[35914], 99.50th=[36439], 99.90th=[47449], 99.95th=[47449], 00:41:19.391 | 99.99th=[47449] 00:41:19.391 bw ( KiB/s): min= 1792, max= 2011, per=4.17%, avg=1892.55, stdev=62.88, samples=20 00:41:19.391 iops : min= 448, max= 502, avg=473.10, stdev=15.65, samples=20 00:41:19.391 lat (msec) : 50=100.00% 00:41:19.391 cpu : usr=97.99%, sys=1.39%, ctx=45, majf=0, minf=35 00:41:19.391 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.391 filename0: (groupid=0, jobs=1): err= 0: pid=465236: Mon Oct 14 13:52:09 2024 00:41:19.391 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10031msec) 00:41:19.391 slat (nsec): min=10014, max=66773, avg=33810.04, stdev=10049.36 00:41:19.391 clat (usec): min=26290, max=36508, avg=33486.03, stdev=638.72 00:41:19.391 lat (usec): min=26308, max=36529, avg=33519.84, stdev=638.32 00:41:19.391 clat percentiles (usec): 00:41:19.391 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.391 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.391 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.391 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:41:19.391 | 99.99th=[36439] 00:41:19.391 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.391 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.391 lat (msec) : 50=100.00% 00:41:19.391 cpu : usr=98.42%, sys=1.18%, 
ctx=18, majf=0, minf=26 00:41:19.391 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 issued rwts: total=4747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.391 filename0: (groupid=0, jobs=1): err= 0: pid=465237: Mon Oct 14 13:52:09 2024 00:41:19.391 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:41:19.391 slat (usec): min=9, max=118, avg=40.72, stdev=19.37 00:41:19.391 clat (usec): min=21813, max=39924, avg=33376.17, stdev=983.14 00:41:19.391 lat (usec): min=21834, max=39951, avg=33416.89, stdev=980.03 00:41:19.391 clat percentiles (usec): 00:41:19.391 | 1.00th=[30278], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:19.391 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.391 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.391 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:41:19.391 | 99.99th=[40109] 00:41:19.391 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.391 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.391 lat (msec) : 50=100.00% 00:41:19.391 cpu : usr=98.05%, sys=1.44%, ctx=52, majf=0, minf=36 00:41:19.391 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.391 filename1: (groupid=0, jobs=1): err= 0: pid=465238: Mon Oct 14 13:52:09 2024 00:41:19.391 read: 
IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:41:19.391 slat (usec): min=9, max=124, avg=51.23, stdev=25.39 00:41:19.391 clat (usec): min=13661, max=61576, avg=33370.67, stdev=2079.04 00:41:19.391 lat (usec): min=13692, max=61591, avg=33421.89, stdev=2074.56 00:41:19.391 clat percentiles (usec): 00:41:19.391 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:19.391 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:19.391 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.391 | 99.00th=[35914], 99.50th=[36439], 99.90th=[61604], 99.95th=[61604], 00:41:19.391 | 99.99th=[61604] 00:41:19.391 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1886.47, stdev=71.42, samples=19 00:41:19.391 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:41:19.391 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:41:19.391 cpu : usr=97.67%, sys=1.55%, ctx=453, majf=0, minf=24 00:41:19.391 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.391 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.391 filename1: (groupid=0, jobs=1): err= 0: pid=465239: Mon Oct 14 13:52:09 2024 00:41:19.391 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10010msec) 00:41:19.391 slat (usec): min=8, max=113, avg=51.41, stdev=25.28 00:41:19.391 clat (usec): min=20157, max=52181, avg=33359.59, stdev=1450.50 00:41:19.391 lat (usec): min=20193, max=52210, avg=33411.01, stdev=1444.68 00:41:19.391 clat percentiles (usec): 00:41:19.391 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:19.392 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:19.392 | 70.00th=[33817], 80.00th=[33817], 
90.00th=[33817], 95.00th=[34341], 00:41:19.392 | 99.00th=[36439], 99.50th=[36439], 99.90th=[52167], 99.95th=[52167], 00:41:19.392 | 99.99th=[52167] 00:41:19.392 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1886.32, stdev=57.91, samples=19 00:41:19.392 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:41:19.392 lat (msec) : 50=99.66%, 100=0.34% 00:41:19.392 cpu : usr=98.39%, sys=1.19%, ctx=17, majf=0, minf=25 00:41:19.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.392 filename1: (groupid=0, jobs=1): err= 0: pid=465240: Mon Oct 14 13:52:09 2024 00:41:19.392 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:41:19.392 slat (nsec): min=8355, max=95004, avg=21832.07, stdev=12382.76 00:41:19.392 clat (usec): min=18955, max=36767, avg=33529.91, stdev=1181.44 00:41:19.392 lat (usec): min=19013, max=36788, avg=33551.74, stdev=1179.91 00:41:19.392 clat percentiles (usec): 00:41:19.392 | 1.00th=[31065], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:41:19.392 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:19.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.392 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:41:19.392 | 99.99th=[36963] 00:41:19.392 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.392 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.392 lat (msec) : 20=0.34%, 50=99.66% 00:41:19.392 cpu : usr=98.46%, sys=1.16%, ctx=16, majf=0, minf=73 00:41:19.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:41:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.392 filename1: (groupid=0, jobs=1): err= 0: pid=465241: Mon Oct 14 13:52:09 2024 00:41:19.392 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10002msec) 00:41:19.392 slat (nsec): min=8383, max=68643, avg=31530.35, stdev=8545.45 00:41:19.392 clat (usec): min=23733, max=73851, avg=33615.74, stdev=2443.84 00:41:19.392 lat (usec): min=23756, max=73884, avg=33647.27, stdev=2443.71 00:41:19.392 clat percentiles (usec): 00:41:19.392 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.392 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.392 | 99.00th=[35914], 99.50th=[36439], 99.90th=[73925], 99.95th=[73925], 00:41:19.392 | 99.99th=[73925] 00:41:19.392 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1879.74, stdev=74.07, samples=19 00:41:19.392 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:41:19.392 lat (msec) : 50=99.66%, 100=0.34% 00:41:19.392 cpu : usr=97.80%, sys=1.44%, ctx=98, majf=0, minf=36 00:41:19.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.392 filename1: (groupid=0, jobs=1): err= 0: pid=465242: Mon Oct 14 13:52:09 2024 00:41:19.392 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:41:19.392 slat (nsec): min=8311, 
max=75748, avg=30715.86, stdev=11078.90 00:41:19.392 clat (usec): min=18976, max=42613, avg=33438.72, stdev=1208.89 00:41:19.392 lat (usec): min=19035, max=42638, avg=33469.43, stdev=1209.40 00:41:19.392 clat percentiles (usec): 00:41:19.392 | 1.00th=[31589], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.392 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.392 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:41:19.392 | 99.99th=[42730] 00:41:19.392 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.392 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.392 lat (msec) : 20=0.34%, 50=99.66% 00:41:19.392 cpu : usr=98.14%, sys=1.31%, ctx=55, majf=0, minf=24 00:41:19.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.392 filename1: (groupid=0, jobs=1): err= 0: pid=465243: Mon Oct 14 13:52:09 2024 00:41:19.392 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:41:19.392 slat (usec): min=8, max=110, avg=26.71, stdev=22.78 00:41:19.392 clat (usec): min=19059, max=36854, avg=33478.37, stdev=1225.27 00:41:19.392 lat (usec): min=19086, max=36873, avg=33505.08, stdev=1220.46 00:41:19.392 clat percentiles (usec): 00:41:19.392 | 1.00th=[31589], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:19.392 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:41:19.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:41:19.392 | 99.00th=[35390], 99.50th=[36439], 
99.90th=[36963], 99.95th=[36963], 00:41:19.392 | 99.99th=[36963] 00:41:19.392 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.392 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.392 lat (msec) : 20=0.34%, 50=99.66% 00:41:19.392 cpu : usr=98.27%, sys=1.28%, ctx=24, majf=0, minf=41 00:41:19.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.392 filename1: (groupid=0, jobs=1): err= 0: pid=465244: Mon Oct 14 13:52:09 2024 00:41:19.392 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:41:19.392 slat (usec): min=9, max=107, avg=35.56, stdev=16.63 00:41:19.392 clat (usec): min=13665, max=62074, avg=33496.06, stdev=2067.12 00:41:19.392 lat (usec): min=13686, max=62105, avg=33531.62, stdev=2066.51 00:41:19.392 clat percentiles (usec): 00:41:19.392 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:19.392 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.392 | 99.00th=[35390], 99.50th=[36439], 99.90th=[62129], 99.95th=[62129], 00:41:19.392 | 99.99th=[62129] 00:41:19.392 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:41:19.392 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:41:19.392 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:41:19.392 cpu : usr=97.89%, sys=1.56%, ctx=51, majf=0, minf=39 00:41:19.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:19.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.392 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.392 filename1: (groupid=0, jobs=1): err= 0: pid=465245: Mon Oct 14 13:52:09 2024 00:41:19.392 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10001msec) 00:41:19.392 slat (usec): min=7, max=110, avg=25.66, stdev=21.79 00:41:19.392 clat (usec): min=3078, max=36713, avg=33123.66, stdev=3535.09 00:41:19.392 lat (usec): min=3086, max=36733, avg=33149.32, stdev=3535.29 00:41:19.392 clat percentiles (usec): 00:41:19.392 | 1.00th=[ 5014], 5.00th=[32637], 10.00th=[33162], 20.00th=[33424], 00:41:19.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:41:19.393 | 99.99th=[36963] 00:41:19.393 bw ( KiB/s): min= 1792, max= 2308, per=4.23%, avg=1920.21, stdev=105.33, samples=19 00:41:19.393 iops : min= 448, max= 577, avg=480.05, stdev=26.33, samples=19 00:41:19.393 lat (msec) : 4=0.67%, 10=0.67%, 20=0.04%, 50=98.62% 00:41:19.393 cpu : usr=98.33%, sys=1.28%, ctx=13, majf=0, minf=58 00:41:19.393 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.393 filename2: (groupid=0, jobs=1): err= 0: pid=465246: Mon Oct 14 13:52:09 2024 00:41:19.393 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10031msec) 00:41:19.393 slat (nsec): min=8847, max=94887, avg=30238.90, stdev=10480.47 00:41:19.393 clat (usec): min=26297, max=36488, 
avg=33527.21, stdev=651.02 00:41:19.393 lat (usec): min=26324, max=36511, avg=33557.45, stdev=648.96 00:41:19.393 clat percentiles (usec): 00:41:19.393 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:41:19.393 | 99.99th=[36439] 00:41:19.393 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.393 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.393 lat (msec) : 50=100.00% 00:41:19.393 cpu : usr=97.32%, sys=1.76%, ctx=105, majf=0, minf=43 00:41:19.393 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: total=4747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.393 filename2: (groupid=0, jobs=1): err= 0: pid=465247: Mon Oct 14 13:52:09 2024 00:41:19.393 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10019msec) 00:41:19.393 slat (usec): min=11, max=168, avg=33.31, stdev=11.41 00:41:19.393 clat (usec): min=21708, max=36667, avg=33438.54, stdev=939.74 00:41:19.393 lat (usec): min=21737, max=36709, avg=33471.84, stdev=940.56 00:41:19.393 clat percentiles (usec): 00:41:19.393 | 1.00th=[30540], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.393 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:41:19.393 | 99.99th=[36439] 00:41:19.393 bw ( KiB/s): min= 1792, max= 
1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.393 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.393 lat (msec) : 50=100.00% 00:41:19.393 cpu : usr=98.06%, sys=1.35%, ctx=69, majf=0, minf=25 00:41:19.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.393 filename2: (groupid=0, jobs=1): err= 0: pid=465248: Mon Oct 14 13:52:09 2024 00:41:19.393 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:41:19.393 slat (usec): min=8, max=120, avg=51.58, stdev=23.49 00:41:19.393 clat (usec): min=19038, max=36812, avg=33247.21, stdev=1214.73 00:41:19.393 lat (usec): min=19095, max=36863, avg=33298.79, stdev=1211.05 00:41:19.393 clat percentiles (usec): 00:41:19.393 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:19.393 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36963], 00:41:19.393 | 99.99th=[36963] 00:41:19.393 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.393 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.393 lat (msec) : 20=0.34%, 50=99.66% 00:41:19.393 cpu : usr=96.60%, sys=2.09%, ctx=166, majf=0, minf=38 00:41:19.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: 
total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.393 filename2: (groupid=0, jobs=1): err= 0: pid=465249: Mon Oct 14 13:52:09 2024 00:41:19.393 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10011msec) 00:41:19.393 slat (usec): min=8, max=121, avg=52.44, stdev=25.94 00:41:19.393 clat (usec): min=13756, max=80595, avg=33385.14, stdev=2260.39 00:41:19.393 lat (usec): min=13780, max=80629, avg=33437.58, stdev=2257.17 00:41:19.393 clat percentiles (usec): 00:41:19.393 | 1.00th=[30278], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:19.393 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[36439], 99.50th=[38011], 99.90th=[61604], 99.95th=[61604], 00:41:19.393 | 99.99th=[80217] 00:41:19.393 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1885.63, stdev=71.09, samples=19 00:41:19.393 iops : min= 416, max= 480, avg=471.37, stdev=17.90, samples=19 00:41:19.393 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:41:19.393 cpu : usr=98.52%, sys=1.06%, ctx=16, majf=0, minf=38 00:41:19.393 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: total=4734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.393 filename2: (groupid=0, jobs=1): err= 0: pid=465250: Mon Oct 14 13:52:09 2024 00:41:19.393 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10013msec) 00:41:19.393 slat (nsec): min=4030, max=81313, avg=32798.29, stdev=10969.39 00:41:19.393 clat (usec): min=23661, max=61025, avg=33540.60, stdev=1297.68 00:41:19.393 lat (usec): min=23672, max=61040, avg=33573.40, stdev=1296.59 00:41:19.393 
clat percentiles (usec): 00:41:19.393 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:19.393 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[35914], 99.50th=[36439], 99.90th=[50070], 99.95th=[50070], 00:41:19.393 | 99.99th=[61080] 00:41:19.393 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:41:19.393 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:41:19.393 lat (msec) : 50=99.66%, 100=0.34% 00:41:19.393 cpu : usr=98.38%, sys=1.24%, ctx=19, majf=0, minf=25 00:41:19.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.393 filename2: (groupid=0, jobs=1): err= 0: pid=465251: Mon Oct 14 13:52:09 2024 00:41:19.393 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:41:19.393 slat (usec): min=8, max=122, avg=43.18, stdev=22.77 00:41:19.393 clat (usec): min=13563, max=61509, avg=33427.10, stdev=2059.29 00:41:19.393 lat (usec): min=13582, max=61543, avg=33470.28, stdev=2057.36 00:41:19.393 clat percentiles (usec): 00:41:19.393 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:41:19.393 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[35390], 99.50th=[35914], 99.90th=[61604], 99.95th=[61604], 00:41:19.393 | 99.99th=[61604] 00:41:19.393 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1886.47, stdev=71.42, samples=19 00:41:19.393 iops : min= 416, max= 480, avg=471.58, 
stdev=17.98, samples=19 00:41:19.393 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:41:19.393 cpu : usr=97.74%, sys=1.35%, ctx=156, majf=0, minf=37 00:41:19.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.393 filename2: (groupid=0, jobs=1): err= 0: pid=465252: Mon Oct 14 13:52:09 2024 00:41:19.393 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:41:19.393 slat (nsec): min=8456, max=69022, avg=28462.10, stdev=12543.25 00:41:19.393 clat (usec): min=13916, max=36765, avg=33496.45, stdev=978.13 00:41:19.393 lat (usec): min=13944, max=36792, avg=33524.92, stdev=978.67 00:41:19.393 clat percentiles (usec): 00:41:19.393 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:41:19.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:41:19.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.393 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36963], 00:41:19.393 | 99.99th=[36963] 00:41:19.393 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:41:19.393 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:41:19.393 lat (msec) : 20=0.04%, 50=99.96% 00:41:19.393 cpu : usr=97.26%, sys=1.65%, ctx=289, majf=0, minf=39 00:41:19.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.393 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.394 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:41:19.394 filename2: (groupid=0, jobs=1): err= 0: pid=465253: Mon Oct 14 13:52:09 2024 00:41:19.394 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:41:19.394 slat (usec): min=8, max=101, avg=36.75, stdev=17.52 00:41:19.394 clat (usec): min=20169, max=54220, avg=33485.40, stdev=1502.56 00:41:19.394 lat (usec): min=20192, max=54260, avg=33522.15, stdev=1501.09 00:41:19.394 clat percentiles (usec): 00:41:19.394 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:19.394 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:19.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:41:19.394 | 99.00th=[36439], 99.50th=[36439], 99.90th=[54264], 99.95th=[54264], 00:41:19.394 | 99.99th=[54264] 00:41:19.394 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:41:19.394 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:41:19.394 lat (msec) : 50=99.66%, 100=0.34% 00:41:19.394 cpu : usr=97.83%, sys=1.51%, ctx=54, majf=0, minf=26 00:41:19.394 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:19.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.394 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:19.394 00:41:19.394 Run status group 0 (all jobs): 00:41:19.394 READ: bw=44.4MiB/s (46.5MB/s), 1887KiB/s-1920KiB/s (1933kB/s-1966kB/s), io=445MiB (467MB), run=10001-10031msec 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:19.394 13:52:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:19.394 
13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 bdev_null0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:19.394 [2024-10-14 13:52:10.155518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 bdev_null1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:19.394 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:19.394 { 00:41:19.394 "params": { 00:41:19.395 "name": "Nvme$subsystem", 00:41:19.395 "trtype": "$TEST_TRANSPORT", 00:41:19.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.395 "adrfam": "ipv4", 00:41:19.395 "trsvcid": "$NVMF_PORT", 00:41:19.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.395 "hdgst": ${hdgst:-false}, 00:41:19.395 "ddgst": ${ddgst:-false} 00:41:19.395 }, 00:41:19.395 "method": "bdev_nvme_attach_controller" 00:41:19.395 } 00:41:19.395 EOF 00:41:19.395 )") 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:19.395 13:52:10 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:19.395 { 00:41:19.395 "params": { 00:41:19.395 "name": "Nvme$subsystem", 00:41:19.395 "trtype": "$TEST_TRANSPORT", 00:41:19.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.395 "adrfam": "ipv4", 00:41:19.395 "trsvcid": "$NVMF_PORT", 00:41:19.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.395 "hdgst": ${hdgst:-false}, 00:41:19.395 "ddgst": ${ddgst:-false} 00:41:19.395 }, 00:41:19.395 "method": "bdev_nvme_attach_controller" 00:41:19.395 } 00:41:19.395 EOF 00:41:19.395 )") 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:19.395 "params": { 00:41:19.395 "name": "Nvme0", 00:41:19.395 "trtype": "tcp", 00:41:19.395 "traddr": "10.0.0.2", 00:41:19.395 "adrfam": "ipv4", 00:41:19.395 "trsvcid": "4420", 00:41:19.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:19.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:19.395 "hdgst": false, 00:41:19.395 "ddgst": false 00:41:19.395 }, 00:41:19.395 "method": "bdev_nvme_attach_controller" 00:41:19.395 },{ 00:41:19.395 "params": { 00:41:19.395 "name": "Nvme1", 00:41:19.395 "trtype": "tcp", 00:41:19.395 "traddr": "10.0.0.2", 00:41:19.395 "adrfam": "ipv4", 00:41:19.395 "trsvcid": "4420", 00:41:19.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:19.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:19.395 "hdgst": false, 00:41:19.395 "ddgst": false 00:41:19.395 }, 00:41:19.395 "method": "bdev_nvme_attach_controller" 00:41:19.395 }' 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:19.395 13:52:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:19.395 13:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.395 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:19.395 ... 00:41:19.395 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:19.395 ... 00:41:19.395 fio-3.35 00:41:19.395 Starting 4 threads 00:41:24.657 00:41:24.658 filename0: (groupid=0, jobs=1): err= 0: pid=466512: Mon Oct 14 13:52:16 2024 00:41:24.658 read: IOPS=1906, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5004msec) 00:41:24.658 slat (nsec): min=6801, max=66325, avg=13156.29, stdev=7002.35 00:41:24.658 clat (usec): min=774, max=39150, avg=4152.57, stdev=1152.25 00:41:24.658 lat (usec): min=792, max=39179, avg=4165.73, stdev=1152.34 00:41:24.658 clat percentiles (usec): 00:41:24.658 | 1.00th=[ 2278], 5.00th=[ 3294], 10.00th=[ 3523], 20.00th=[ 3818], 00:41:24.658 | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:24.658 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4883], 00:41:24.658 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7635], 99.95th=[39060], 00:41:24.658 | 99.99th=[39060] 00:41:24.658 bw ( KiB/s): min=13856, max=15856, per=25.60%, avg=15248.00, stdev=581.65, samples=10 00:41:24.658 iops : min= 1732, max= 1982, avg=1906.00, stdev=72.71, samples=10 00:41:24.658 lat (usec) : 1000=0.03% 00:41:24.658 lat (msec) : 2=0.48%, 4=29.95%, 10=69.45%, 50=0.08% 00:41:24.658 cpu : usr=94.96%, sys=4.54%, ctx=6, majf=0, minf=9 00:41:24.658 IO depths : 1=0.4%, 2=11.8%, 4=59.9%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:24.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 
complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 issued rwts: total=9538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:24.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:24.658 filename0: (groupid=0, jobs=1): err= 0: pid=466513: Mon Oct 14 13:52:16 2024 00:41:24.658 read: IOPS=1823, BW=14.2MiB/s (14.9MB/s)(71.3MiB/5004msec) 00:41:24.658 slat (nsec): min=6614, max=69118, avg=16195.89, stdev=8630.44 00:41:24.658 clat (usec): min=756, max=38924, avg=4328.40, stdev=1239.37 00:41:24.658 lat (usec): min=770, max=38945, avg=4344.59, stdev=1239.24 00:41:24.658 clat percentiles (usec): 00:41:24.658 | 1.00th=[ 2376], 5.00th=[ 3425], 10.00th=[ 3752], 20.00th=[ 4015], 00:41:24.658 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:24.658 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5735], 00:41:24.658 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8029], 99.95th=[39060], 00:41:24.658 | 99.99th=[39060] 00:41:24.658 bw ( KiB/s): min=14048, max=14992, per=24.49%, avg=14591.60, stdev=361.73, samples=10 00:41:24.658 iops : min= 1756, max= 1874, avg=1823.90, stdev=45.28, samples=10 00:41:24.658 lat (usec) : 1000=0.04% 00:41:24.658 lat (msec) : 2=0.64%, 4=18.89%, 10=80.34%, 50=0.09% 00:41:24.658 cpu : usr=94.56%, sys=4.90%, ctx=8, majf=0, minf=10 00:41:24.658 IO depths : 1=0.2%, 2=14.8%, 4=57.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:24.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 issued rwts: total=9126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:24.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:24.658 filename1: (groupid=0, jobs=1): err= 0: pid=466514: Mon Oct 14 13:52:16 2024 00:41:24.658 read: IOPS=1874, BW=14.6MiB/s (15.4MB/s)(73.3MiB/5003msec) 00:41:24.658 slat (nsec): min=6632, max=69252, avg=15209.42, stdev=8332.82 00:41:24.658 
clat (usec): min=1088, max=39186, avg=4213.40, stdev=1171.72 00:41:24.658 lat (usec): min=1101, max=39204, avg=4228.61, stdev=1171.72 00:41:24.658 clat percentiles (usec): 00:41:24.658 | 1.00th=[ 2442], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3884], 00:41:24.658 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:41:24.658 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5145], 00:41:24.658 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7635], 99.95th=[39060], 00:41:24.658 | 99.99th=[39060] 00:41:24.658 bw ( KiB/s): min=13840, max=15664, per=25.17%, avg=14995.20, stdev=527.32, samples=10 00:41:24.658 iops : min= 1730, max= 1958, avg=1874.40, stdev=65.91, samples=10 00:41:24.658 lat (msec) : 2=0.51%, 4=25.22%, 10=74.18%, 50=0.09% 00:41:24.658 cpu : usr=95.54%, sys=3.94%, ctx=10, majf=0, minf=9 00:41:24.658 IO depths : 1=0.3%, 2=16.2%, 4=56.2%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:24.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 issued rwts: total=9380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:24.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:24.658 filename1: (groupid=0, jobs=1): err= 0: pid=466515: Mon Oct 14 13:52:16 2024 00:41:24.658 read: IOPS=1842, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5002msec) 00:41:24.658 slat (nsec): min=6445, max=65744, avg=17598.63, stdev=8964.94 00:41:24.658 clat (usec): min=908, max=38981, avg=4275.26, stdev=1219.10 00:41:24.658 lat (usec): min=921, max=39001, avg=4292.86, stdev=1219.13 00:41:24.658 clat percentiles (usec): 00:41:24.658 | 1.00th=[ 2245], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3982], 00:41:24.658 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:24.658 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5538], 00:41:24.658 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7832], 99.95th=[39060], 
00:41:24.658 | 99.99th=[39060] 00:41:24.658 bw ( KiB/s): min=13696, max=15360, per=24.74%, avg=14738.80, stdev=436.82, samples=10 00:41:24.658 iops : min= 1712, max= 1920, avg=1842.30, stdev=54.64, samples=10 00:41:24.658 lat (usec) : 1000=0.09% 00:41:24.658 lat (msec) : 2=0.74%, 4=20.31%, 10=78.78%, 50=0.09% 00:41:24.658 cpu : usr=92.90%, sys=5.54%, ctx=193, majf=0, minf=9 00:41:24.658 IO depths : 1=0.3%, 2=16.7%, 4=56.1%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:24.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:24.658 issued rwts: total=9218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:24.658 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:24.658 00:41:24.658 Run status group 0 (all jobs): 00:41:24.658 READ: bw=58.2MiB/s (61.0MB/s), 14.2MiB/s-14.9MiB/s (14.9MB/s-15.6MB/s), io=291MiB (305MB), run=5002-5004msec 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:24.658 13:52:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.658 00:41:24.658 real 0m24.372s 00:41:24.658 user 4m33.017s 00:41:24.658 sys 0m6.206s 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:24.658 13:52:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 ************************************ 00:41:24.658 END TEST fio_dif_rand_params 00:41:24.658 ************************************ 00:41:24.658 13:52:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:24.658 13:52:16 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:24.658 13:52:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:24.658 13:52:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 ************************************ 00:41:24.658 START TEST fio_dif_digest 00:41:24.658 ************************************ 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 bdev_null0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:24.658 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:24.659 [2024-10-14 13:52:16.506265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@558 -- # config=() 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:24.659 { 00:41:24.659 "params": { 00:41:24.659 "name": "Nvme$subsystem", 00:41:24.659 "trtype": "$TEST_TRANSPORT", 00:41:24.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:24.659 "adrfam": "ipv4", 00:41:24.659 "trsvcid": "$NVMF_PORT", 00:41:24.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:24.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:24.659 "hdgst": ${hdgst:-false}, 00:41:24.659 "ddgst": ${ddgst:-false} 00:41:24.659 }, 00:41:24.659 "method": "bdev_nvme_attach_controller" 00:41:24.659 } 00:41:24.659 EOF 00:41:24.659 )") 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:24.659 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:24.917 "params": { 00:41:24.917 "name": "Nvme0", 00:41:24.917 "trtype": "tcp", 00:41:24.917 "traddr": "10.0.0.2", 00:41:24.917 "adrfam": "ipv4", 00:41:24.917 "trsvcid": "4420", 00:41:24.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:24.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:24.917 "hdgst": true, 00:41:24.917 "ddgst": true 00:41:24.917 }, 00:41:24.917 "method": "bdev_nvme_attach_controller" 00:41:24.917 }' 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:24.917 13:52:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:25.174 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:25.174 ... 
00:41:25.174 fio-3.35 00:41:25.174 Starting 3 threads 00:41:37.368 00:41:37.368 filename0: (groupid=0, jobs=1): err= 0: pid=467385: Mon Oct 14 13:52:27 2024 00:41:37.368 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(239MiB/10046msec) 00:41:37.368 slat (nsec): min=8354, max=69289, avg=16835.31, stdev=4136.49 00:41:37.368 clat (usec): min=12045, max=56090, avg=15751.94, stdev=2136.75 00:41:37.368 lat (usec): min=12090, max=56110, avg=15768.77, stdev=2136.74 00:41:37.368 clat percentiles (usec): 00:41:37.368 | 1.00th=[13435], 5.00th=[14091], 10.00th=[14484], 20.00th=[14877], 00:41:37.368 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:41:37.368 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:41:37.368 | 99.00th=[18220], 99.50th=[18482], 99.90th=[55313], 99.95th=[55837], 00:41:37.368 | 99.99th=[55837] 00:41:37.368 bw ( KiB/s): min=22272, max=24832, per=31.57%, avg=24396.80, stdev=593.73, samples=20 00:41:37.368 iops : min= 174, max= 194, avg=190.60, stdev= 4.64, samples=20 00:41:37.368 lat (msec) : 20=99.74%, 50=0.05%, 100=0.21% 00:41:37.368 cpu : usr=95.68%, sys=3.82%, ctx=16, majf=0, minf=166 00:41:37.368 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.368 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.368 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.368 filename0: (groupid=0, jobs=1): err= 0: pid=467386: Mon Oct 14 13:52:27 2024 00:41:37.368 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10047msec) 00:41:37.368 slat (nsec): min=4613, max=93742, avg=20870.26, stdev=4697.44 00:41:37.369 clat (usec): min=9530, max=51015, avg=14543.35, stdev=1405.47 00:41:37.369 lat (usec): min=9565, max=51036, avg=14564.22, stdev=1405.49 00:41:37.369 clat percentiles (usec): 00:41:37.369 | 
1.00th=[11600], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:41:37.369 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:41:37.369 | 70.00th=[15139], 80.00th=[15401], 90.00th=[16057], 95.00th=[16450], 00:41:37.369 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:41:37.369 | 99.99th=[51119] 00:41:37.369 bw ( KiB/s): min=25600, max=27392, per=34.16%, avg=26393.60, stdev=469.11, samples=20 00:41:37.369 iops : min= 200, max= 214, avg=206.20, stdev= 3.66, samples=20 00:41:37.369 lat (msec) : 10=0.05%, 20=99.90%, 100=0.05% 00:41:37.369 cpu : usr=95.68%, sys=3.82%, ctx=15, majf=0, minf=151 00:41:37.369 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.369 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.369 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.369 filename0: (groupid=0, jobs=1): err= 0: pid=467387: Mon Oct 14 13:52:27 2024 00:41:37.369 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(262MiB/10047msec) 00:41:37.369 slat (nsec): min=4473, max=73235, avg=16971.41, stdev=4487.45 00:41:37.369 clat (usec): min=9492, max=52488, avg=14354.01, stdev=1523.36 00:41:37.369 lat (usec): min=9512, max=52502, avg=14370.98, stdev=1523.27 00:41:37.369 clat percentiles (usec): 00:41:37.369 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13042], 20.00th=[13566], 00:41:37.369 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:41:37.369 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:41:37.369 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20841], 99.95th=[47973], 00:41:37.369 | 99.99th=[52691] 00:41:37.369 bw ( KiB/s): min=25856, max=28160, per=34.64%, avg=26767.40, stdev=575.30, samples=20 00:41:37.369 iops : min= 202, max= 220, avg=209.10, stdev= 4.52, 
samples=20 00:41:37.369 lat (msec) : 10=0.19%, 20=99.67%, 50=0.10%, 100=0.05% 00:41:37.369 cpu : usr=95.09%, sys=4.42%, ctx=38, majf=0, minf=199 00:41:37.369 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.369 issued rwts: total=2094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.369 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.369 00:41:37.369 Run status group 0 (all jobs): 00:41:37.369 READ: bw=75.5MiB/s (79.1MB/s), 23.7MiB/s-26.1MiB/s (24.9MB/s-27.3MB/s), io=758MiB (795MB), run=10046-10047msec 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:37.369 00:41:37.369 
real 0m11.229s 00:41:37.369 user 0m29.929s 00:41:37.369 sys 0m1.527s 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:37.369 13:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:37.369 ************************************ 00:41:37.369 END TEST fio_dif_digest 00:41:37.369 ************************************ 00:41:37.369 13:52:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:37.369 13:52:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:37.369 rmmod nvme_tcp 00:41:37.369 rmmod nvme_fabrics 00:41:37.369 rmmod nvme_keyring 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 461340 ']' 00:41:37.369 13:52:27 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 461340 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 461340 ']' 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 461340 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 461340 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:37.369 13:52:27 
nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 461340' 00:41:37.369 killing process with pid 461340 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@969 -- # kill 461340 00:41:37.369 13:52:27 nvmf_dif -- common/autotest_common.sh@974 -- # wait 461340 00:41:37.369 13:52:28 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:41:37.369 13:52:28 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:37.369 Waiting for block devices as requested 00:41:37.627 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:37.627 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:37.627 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:37.885 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:37.885 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:37.885 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:38.146 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:38.146 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:38.146 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:38.146 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:38.406 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:38.406 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:38.406 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:38.406 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:38.665 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:38.665 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:38.665 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:38.924 13:52:30 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.924 13:52:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:38.924 13:52:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:40.828 13:52:32 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:40.828 00:41:40.828 real 1m7.291s 00:41:40.828 user 6m30.630s 00:41:40.828 sys 0m17.392s 00:41:40.828 13:52:32 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:40.828 13:52:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:40.828 ************************************ 00:41:40.828 END TEST nvmf_dif 00:41:40.828 ************************************ 00:41:40.828 13:52:32 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:40.828 13:52:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:40.828 13:52:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:40.828 13:52:32 -- common/autotest_common.sh@10 -- # set +x 00:41:41.087 ************************************ 00:41:41.087 START TEST nvmf_abort_qd_sizes 00:41:41.087 ************************************ 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:41.087 * Looking for test storage... 
00:41:41.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:41.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.087 --rc genhtml_branch_coverage=1 00:41:41.087 --rc genhtml_function_coverage=1 00:41:41.087 --rc genhtml_legend=1 00:41:41.087 --rc geninfo_all_blocks=1 00:41:41.087 --rc geninfo_unexecuted_blocks=1 00:41:41.087 00:41:41.087 ' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:41.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.087 --rc genhtml_branch_coverage=1 00:41:41.087 --rc genhtml_function_coverage=1 00:41:41.087 --rc genhtml_legend=1 00:41:41.087 --rc 
geninfo_all_blocks=1 00:41:41.087 --rc geninfo_unexecuted_blocks=1 00:41:41.087 00:41:41.087 ' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:41.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.087 --rc genhtml_branch_coverage=1 00:41:41.087 --rc genhtml_function_coverage=1 00:41:41.087 --rc genhtml_legend=1 00:41:41.087 --rc geninfo_all_blocks=1 00:41:41.087 --rc geninfo_unexecuted_blocks=1 00:41:41.087 00:41:41.087 ' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:41.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.087 --rc genhtml_branch_coverage=1 00:41:41.087 --rc genhtml_function_coverage=1 00:41:41.087 --rc genhtml_legend=1 00:41:41.087 --rc geninfo_all_blocks=1 00:41:41.087 --rc geninfo_unexecuted_blocks=1 00:41:41.087 00:41:41.087 ' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:41.087 13:52:32 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.087 13:52:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:41.088 13:52:32 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:41.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:41.088 13:52:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:43.619 13:52:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:43.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:43.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:43.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up 
== up ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:43.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:43.619 13:52:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:43.619 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:43.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:43.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:41:43.620 00:41:43.620 --- 10.0.0.2 ping statistics --- 00:41:43.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.620 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:41:43.620 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:43.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:43.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:41:43.620 00:41:43.620 --- 10.0.0.1 ping statistics --- 00:41:43.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.620 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:41:43.620 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:43.620 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:41:43.620 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:41:43.620 13:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:44.553 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:44.553 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:44.553 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:44.553 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:44.553 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:44.553 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:44.553 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:44.553 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:44.553 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:45.490 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:45.765 13:52:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=472219 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 472219 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 472219 ']' 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:45.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:45.765 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:45.765 [2024-10-14 13:52:37.502086] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:41:45.765 [2024-10-14 13:52:37.502176] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:45.765 [2024-10-14 13:52:37.570522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:46.023 [2024-10-14 13:52:37.624273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:46.023 [2024-10-14 13:52:37.624329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:46.023 [2024-10-14 13:52:37.624343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:46.023 [2024-10-14 13:52:37.624358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:46.023 [2024-10-14 13:52:37.624367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:46.023 [2024-10-14 13:52:37.625895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.023 [2024-10-14 13:52:37.625959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:46.023 [2024-10-14 13:52:37.626024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:46.023 [2024-10-14 13:52:37.626027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
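The `nvme_in_userspace` trace above keeps a BDF only if it is still bound to the kernel `nvme` driver (`[[ -e /sys/bus/pci/drivers/nvme/$bdf ]]`). A sketch of that filter, run against a throwaway directory tree instead of the real `/sys` so it needs no hardware or root (the `sysroot` layout here is an assumption, not the script's actual variables):

```shell
# Fake a /sys/bus/pci tree with one NVMe device bound to the nvme driver.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/bus/pci/drivers/nvme/0000:88:00.0"

nvmes=("0000:88:00.0" "0000:89:00.0")   # second BDF is deliberately unbound
bdfs=()
for bdf in "${nvmes[@]}"; do
    # Keep only devices with a driver symlink-style entry under the nvme driver,
    # mirroring the existence check in scripts/common.sh.
    [[ -e "$sysroot/bus/pci/drivers/nvme/$bdf" ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"
rm -rf "$sysroot"
```

Devices rebound to `vfio-pci` (as in the setup.sh output earlier in the log) would fail this check and be skipped.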
00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:46.023 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:46.024 13:52:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:46.024 ************************************ 00:41:46.024 START TEST spdk_target_abort 00:41:46.024 ************************************ 00:41:46.024 13:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:41:46.024 13:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:46.024 13:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:41:46.024 13:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:46.024 13:52:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.301 spdk_targetn1 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.301 [2024-10-14 13:52:40.636861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:49.301 [2024-10-14 13:52:40.679242] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:49.301 13:52:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:52.579 Initializing NVMe Controllers 00:41:52.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:52.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:52.579 Initialization complete. Launching workers. 
00:41:52.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11030, failed: 0 00:41:52.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1193, failed to submit 9837 00:41:52.580 success 704, unsuccessful 489, failed 0 00:41:52.580 13:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:52.580 13:52:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:55.861 Initializing NVMe Controllers 00:41:55.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:55.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:55.861 Initialization complete. Launching workers. 00:41:55.861 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8697, failed: 0 00:41:55.861 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 7477 00:41:55.861 success 339, unsuccessful 881, failed 0 00:41:55.861 13:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:55.862 13:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:59.139 Initializing NVMe Controllers 00:41:59.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:59.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:59.139 Initialization complete. Launching workers. 
00:41:59.139 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29549, failed: 0 00:41:59.139 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2671, failed to submit 26878 00:41:59.139 success 453, unsuccessful 2218, failed 0 00:41:59.139 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:59.139 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.139 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:59.139 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:59.139 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:59.139 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:59.139 13:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 472219 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 472219 ']' 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 472219 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 472219 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
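The per-run counters printed by the abort tool are internally consistent: aborts submitted should equal success plus unsuccessful, and I/O completed should equal aborts submitted plus failed-to-submit. A quick arithmetic check over the qd=64 run, with the numbers copied from the log above:

```shell
# Counters from the -q 64 abort run in this log.
completed=29549
submitted=2671
failed_to_submit=26878
success=453
unsuccessful=2218

# Every submitted abort ends up either successful or unsuccessful.
accounted=$((success + unsuccessful))
# Every completed I/O either had an abort submitted against it or did not.
total=$((submitted + failed_to_submit))
echo "$accounted $total"
```

The same identities hold for the qd=4 run (704 + 489 = 1193 submitted; 1193 + 9837 = 11030 completed) and the qd=24 run.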
common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 472219' 00:42:00.070 killing process with pid 472219 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 472219 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 472219 00:42:00.070 00:42:00.070 real 0m14.046s 00:42:00.070 user 0m53.174s 00:42:00.070 sys 0m2.518s 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:00.070 ************************************ 00:42:00.070 END TEST spdk_target_abort 00:42:00.070 ************************************ 00:42:00.070 13:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:00.070 13:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:00.070 13:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:00.070 13:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:00.070 ************************************ 00:42:00.070 START TEST kernel_target_abort 00:42:00.070 ************************************ 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:42:00.070 13:52:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:42:00.070 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:42:00.328 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:00.328 13:52:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:01.265 Waiting for block devices as requested 00:42:01.265 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:01.524 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:01.524 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:01.784 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:01.784 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:01.784 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:01.784 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:02.042 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:02.042 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:02.042 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:02.042 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:02.300 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:02.300 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:02.300 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:02.559 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:02.559 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:02.559 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local 
device=nvme0n1 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:02.817 No valid GPT data, bailing 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:02.817 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:02.818 00:42:02.818 Discovery Log Number of Records 2, Generation counter 2 00:42:02.818 =====Discovery Log Entry 0====== 00:42:02.818 trtype: tcp 00:42:02.818 adrfam: ipv4 00:42:02.818 subtype: current discovery subsystem 00:42:02.818 treq: not specified, sq flow control disable supported 00:42:02.818 portid: 1 00:42:02.818 trsvcid: 4420 00:42:02.818 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:02.818 traddr: 10.0.0.1 00:42:02.818 eflags: none 00:42:02.818 sectype: none 00:42:02.818 =====Discovery Log Entry 1====== 00:42:02.818 trtype: tcp 00:42:02.818 adrfam: ipv4 00:42:02.818 subtype: nvme subsystem 00:42:02.818 treq: not specified, sq flow control disable supported 00:42:02.818 portid: 1 00:42:02.818 trsvcid: 4420 00:42:02.818 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:02.818 traddr: 10.0.0.1 00:42:02.818 eflags: none 00:42:02.818 sectype: none 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:02.818 13:52:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:06.097 Initializing NVMe Controllers 00:42:06.097 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:06.097 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:06.097 Initialization complete. Launching workers. 
00:42:06.097 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56745, failed: 0 00:42:06.097 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56745, failed to submit 0 00:42:06.097 success 0, unsuccessful 56745, failed 0 00:42:06.097 13:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:06.097 13:52:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:09.386 Initializing NVMe Controllers 00:42:09.386 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:09.386 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:09.386 Initialization complete. Launching workers. 00:42:09.386 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99336, failed: 0 00:42:09.386 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25038, failed to submit 74298 00:42:09.386 success 0, unsuccessful 25038, failed 0 00:42:09.386 13:53:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:09.386 13:53:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:12.665 Initializing NVMe Controllers 00:42:12.665 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:12.665 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:12.665 Initialization complete. Launching workers. 
00:42:12.665 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96029, failed: 0 00:42:12.665 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24010, failed to submit 72019 00:42:12.665 success 0, unsuccessful 24010, failed 0 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:42:12.665 13:53:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:42:12.665 13:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:13.603 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:13.603 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:13.603 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:13.603 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:13.603 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:13.603 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:13.603 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:13.603 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:13.603 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:14.542 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:14.801 00:42:14.801 real 0m14.511s 00:42:14.801 user 0m6.662s 00:42:14.801 sys 0m3.288s 00:42:14.801 13:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:14.801 13:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:14.801 ************************************ 00:42:14.801 END TEST kernel_target_abort 00:42:14.801 ************************************ 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:14.801 rmmod nvme_tcp 00:42:14.801 rmmod nvme_fabrics 00:42:14.801 rmmod nvme_keyring 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 472219 ']' 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 472219 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 472219 ']' 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 472219 00:42:14.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (472219) - No such process 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 472219 is not found' 00:42:14.801 Process with pid 472219 is not found 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:14.801 13:53:06 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:15.741 Waiting for block devices as requested 00:42:16.000 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:16.000 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:16.259 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:16.259 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:16.259 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:16.259 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:16.516 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:16.516 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:16.516 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:16.516 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:16.775 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:16.775 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:16.775 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:16.775 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:17.033 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:42:17.033 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:17.033 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:17.291 13:53:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.195 13:53:10 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:19.195 00:42:19.195 real 0m38.310s 00:42:19.195 user 1m2.171s 00:42:19.196 sys 0m9.383s 00:42:19.196 13:53:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:19.196 13:53:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:19.196 ************************************ 00:42:19.196 END TEST nvmf_abort_qd_sizes 00:42:19.196 ************************************ 00:42:19.196 13:53:11 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:19.196 13:53:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:19.196 13:53:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:42:19.196 13:53:11 -- common/autotest_common.sh@10 -- # set +x 00:42:19.454 ************************************ 00:42:19.454 START TEST keyring_file 00:42:19.454 ************************************ 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:19.454 * Looking for test storage... 00:42:19.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:19.454 13:53:11 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.454 --rc genhtml_branch_coverage=1 00:42:19.454 --rc genhtml_function_coverage=1 00:42:19.454 --rc genhtml_legend=1 00:42:19.454 --rc geninfo_all_blocks=1 00:42:19.454 --rc geninfo_unexecuted_blocks=1 00:42:19.454 00:42:19.454 ' 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.454 --rc genhtml_branch_coverage=1 00:42:19.454 --rc genhtml_function_coverage=1 00:42:19.454 --rc genhtml_legend=1 00:42:19.454 --rc geninfo_all_blocks=1 00:42:19.454 --rc 
geninfo_unexecuted_blocks=1 00:42:19.454 00:42:19.454 ' 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.454 --rc genhtml_branch_coverage=1 00:42:19.454 --rc genhtml_function_coverage=1 00:42:19.454 --rc genhtml_legend=1 00:42:19.454 --rc geninfo_all_blocks=1 00:42:19.454 --rc geninfo_unexecuted_blocks=1 00:42:19.454 00:42:19.454 ' 00:42:19.454 13:53:11 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.454 --rc genhtml_branch_coverage=1 00:42:19.454 --rc genhtml_function_coverage=1 00:42:19.454 --rc genhtml_legend=1 00:42:19.454 --rc geninfo_all_blocks=1 00:42:19.454 --rc geninfo_unexecuted_blocks=1 00:42:19.454 00:42:19.454 ' 00:42:19.454 13:53:11 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:19.454 13:53:11 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:19.454 13:53:11 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:19.454 13:53:11 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:19.454 13:53:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.454 13:53:11 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.454 13:53:11 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.454 13:53:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:19.454 13:53:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:19.454 13:53:11 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:19.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JFHRrsL1kn 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JFHRrsL1kn 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JFHRrsL1kn 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.JFHRrsL1kn 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UKutkzjBYs 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:19.455 13:53:11 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UKutkzjBYs 00:42:19.455 13:53:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UKutkzjBYs 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.UKutkzjBYs 
00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=478058 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:19.455 13:53:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 478058 00:42:19.455 13:53:11 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 478058 ']' 00:42:19.455 13:53:11 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:19.455 13:53:11 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:19.455 13:53:11 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:19.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:19.455 13:53:11 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:19.455 13:53:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:19.713 [2024-10-14 13:53:11.345006] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:42:19.713 [2024-10-14 13:53:11.345107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478058 ] 00:42:19.713 [2024-10-14 13:53:11.403466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:19.713 [2024-10-14 13:53:11.449932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:19.972 13:53:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:19.972 [2024-10-14 13:53:11.705233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:19.972 null0 00:42:19.972 [2024-10-14 13:53:11.737266] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:19.972 [2024-10-14 13:53:11.737778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:19.972 13:53:11 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:19.972 [2024-10-14 13:53:11.761312] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:19.972 request: 00:42:19.972 { 00:42:19.972 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:19.972 "secure_channel": false, 00:42:19.972 "listen_address": { 00:42:19.972 "trtype": "tcp", 00:42:19.972 "traddr": "127.0.0.1", 00:42:19.972 "trsvcid": "4420" 00:42:19.972 }, 00:42:19.972 "method": "nvmf_subsystem_add_listener", 00:42:19.972 "req_id": 1 00:42:19.972 } 00:42:19.972 Got JSON-RPC error response 00:42:19.972 response: 00:42:19.972 { 00:42:19.972 "code": -32602, 00:42:19.972 "message": "Invalid parameters" 00:42:19.972 } 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:19.972 13:53:11 keyring_file -- keyring/file.sh@47 -- # bperfpid=478063 00:42:19.972 13:53:11 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:19.972 13:53:11 keyring_file -- keyring/file.sh@49 -- # waitforlisten 478063 /var/tmp/bperf.sock 00:42:19.972 13:53:11 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 478063 ']' 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:19.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:19.972 13:53:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:19.972 [2024-10-14 13:53:11.812485] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 00:42:19.972 [2024-10-14 13:53:11.812569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478063 ] 00:42:20.229 [2024-10-14 13:53:11.870385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:20.229 [2024-10-14 13:53:11.915274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:20.229 13:53:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:20.229 13:53:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:20.229 13:53:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:20.229 13:53:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:20.487 13:53:12 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UKutkzjBYs 00:42:20.487 13:53:12 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UKutkzjBYs 00:42:20.744 13:53:12 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:20.744 13:53:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:20.744 13:53:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.744 13:53:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.744 13:53:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:21.002 13:53:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JFHRrsL1kn == \/\t\m\p\/\t\m\p\.\J\F\H\R\r\s\L\1\k\n ]] 00:42:21.002 13:53:12 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:21.002 13:53:12 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:21.002 13:53:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.002 13:53:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.002 13:53:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:21.261 13:53:13 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.UKutkzjBYs == \/\t\m\p\/\t\m\p\.\U\K\u\t\k\z\j\B\Y\s ]] 00:42:21.261 13:53:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:21.261 13:53:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:21.261 13:53:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:21.261 13:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.261 13:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.261 13:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:21.826 13:53:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:21.826 13:53:13 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:21.826 13:53:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:21.826 13:53:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:21.826 13:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.826 13:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.826 13:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:21.826 13:53:13 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:21.826 13:53:13 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:21.826 13:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:22.084 [2024-10-14 13:53:13.902691] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:22.342 nvme0n1 00:42:22.342 13:53:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:22.342 13:53:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:22.342 13:53:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.342 13:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.342 13:53:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.342 13:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:42:22.600 13:53:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:22.600 13:53:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:22.600 13:53:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:22.600 13:53:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.600 13:53:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.600 13:53:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.600 13:53:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:22.858 13:53:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:22.858 13:53:14 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:22.858 Running I/O for 1 seconds... 00:42:24.232 10125.00 IOPS, 39.55 MiB/s 00:42:24.232 Latency(us) 00:42:24.232 [2024-10-14T11:53:16.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:24.232 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:24.232 nvme0n1 : 1.01 10179.94 39.77 0.00 0.00 12536.13 5242.88 24563.86 00:42:24.232 [2024-10-14T11:53:16.085Z] =================================================================================================================== 00:42:24.232 [2024-10-14T11:53:16.085Z] Total : 10179.94 39.77 0.00 0.00 12536.13 5242.88 24563.86 00:42:24.232 { 00:42:24.232 "results": [ 00:42:24.232 { 00:42:24.232 "job": "nvme0n1", 00:42:24.232 "core_mask": "0x2", 00:42:24.232 "workload": "randrw", 00:42:24.232 "percentage": 50, 00:42:24.232 "status": "finished", 00:42:24.232 "queue_depth": 128, 00:42:24.232 "io_size": 4096, 00:42:24.232 "runtime": 1.007275, 00:42:24.232 "iops": 10179.940929736169, 00:42:24.232 "mibps": 39.76539425678191, 
00:42:24.232 "io_failed": 0, 00:42:24.232 "io_timeout": 0, 00:42:24.232 "avg_latency_us": 12536.130092971849, 00:42:24.232 "min_latency_us": 5242.88, 00:42:24.232 "max_latency_us": 24563.863703703704 00:42:24.232 } 00:42:24.232 ], 00:42:24.232 "core_count": 1 00:42:24.232 } 00:42:24.232 13:53:15 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:24.232 13:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:24.232 13:53:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:24.232 13:53:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:24.232 13:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.232 13:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.232 13:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.232 13:53:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.510 13:53:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:24.510 13:53:16 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:24.510 13:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:24.510 13:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.510 13:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.510 13:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.510 13:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:24.802 13:53:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:24.802 13:53:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:24.802 13:53:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:24.802 13:53:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:24.802 13:53:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:24.802 13:53:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:24.802 13:53:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:24.802 13:53:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:24.802 13:53:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:24.802 13:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:25.107 [2024-10-14 13:53:16.765220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:25.107 [2024-10-14 13:53:16.766146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49f60 (107): Transport endpoint is not connected 00:42:25.107 [2024-10-14 13:53:16.767144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49f60 (9): Bad file descriptor 00:42:25.107 [2024-10-14 13:53:16.768143] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:25.107 [2024-10-14 13:53:16.768162] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:25.107 [2024-10-14 13:53:16.768176] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:25.107 [2024-10-14 13:53:16.768192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:42:25.107 request: 00:42:25.107 { 00:42:25.107 "name": "nvme0", 00:42:25.107 "trtype": "tcp", 00:42:25.107 "traddr": "127.0.0.1", 00:42:25.107 "adrfam": "ipv4", 00:42:25.107 "trsvcid": "4420", 00:42:25.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:25.107 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:25.107 "prchk_reftag": false, 00:42:25.107 "prchk_guard": false, 00:42:25.107 "hdgst": false, 00:42:25.107 "ddgst": false, 00:42:25.107 "psk": "key1", 00:42:25.107 "allow_unrecognized_csi": false, 00:42:25.107 "method": "bdev_nvme_attach_controller", 00:42:25.107 "req_id": 1 00:42:25.107 } 00:42:25.107 Got JSON-RPC error response 00:42:25.107 response: 00:42:25.107 { 00:42:25.107 "code": -5, 00:42:25.107 "message": "Input/output error" 00:42:25.107 } 00:42:25.107 13:53:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:25.107 13:53:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:25.107 13:53:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:25.107 13:53:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:25.107 13:53:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:25.107 13:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:25.107 13:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:25.107 13:53:16 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:42:25.107 13:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:25.107 13:53:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:25.364 13:53:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:25.365 13:53:17 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:25.365 13:53:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:25.365 13:53:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:25.365 13:53:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:25.365 13:53:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:25.365 13:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:25.622 13:53:17 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:25.622 13:53:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:25.622 13:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:25.879 13:53:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:25.879 13:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:26.137 13:53:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:26.137 13:53:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:26.137 13:53:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:26.394 13:53:18 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:42:26.394 13:53:18 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.JFHRrsL1kn 00:42:26.394 13:53:18 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:26.394 13:53:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:26.394 13:53:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:26.394 13:53:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:26.394 13:53:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:26.394 13:53:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:26.394 13:53:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:26.394 13:53:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:26.394 13:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:26.652 [2024-10-14 13:53:18.399196] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JFHRrsL1kn': 0100660 00:42:26.652 [2024-10-14 13:53:18.399230] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:26.652 request: 00:42:26.652 { 00:42:26.652 "name": "key0", 00:42:26.652 "path": "/tmp/tmp.JFHRrsL1kn", 00:42:26.652 "method": "keyring_file_add_key", 00:42:26.652 "req_id": 1 00:42:26.652 } 00:42:26.652 Got JSON-RPC error response 00:42:26.652 response: 00:42:26.652 { 00:42:26.652 "code": -1, 00:42:26.652 "message": "Operation not permitted" 00:42:26.652 } 00:42:26.652 13:53:18 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:26.652 13:53:18 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:26.652 13:53:18 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:26.652 13:53:18 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:26.652 13:53:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.JFHRrsL1kn 00:42:26.652 13:53:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:26.652 13:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JFHRrsL1kn 00:42:26.909 13:53:18 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.JFHRrsL1kn 00:42:26.909 13:53:18 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:26.909 13:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:26.909 13:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:26.909 13:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:26.909 13:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:26.909 13:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:27.167 13:53:18 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:27.167 13:53:18 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:27.167 13:53:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:27.167 13:53:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:27.167 13:53:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:27.167 13:53:18 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:27.167 13:53:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:27.167 13:53:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:27.167 13:53:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:27.167 13:53:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:27.425 [2024-10-14 13:53:19.221445] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.JFHRrsL1kn': No such file or directory 00:42:27.425 [2024-10-14 13:53:19.221494] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:27.425 [2024-10-14 13:53:19.221516] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:27.425 [2024-10-14 13:53:19.221529] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:27.425 [2024-10-14 13:53:19.221555] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:27.425 [2024-10-14 13:53:19.221567] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:27.425 request: 00:42:27.425 { 00:42:27.425 "name": "nvme0", 00:42:27.425 "trtype": "tcp", 00:42:27.425 "traddr": "127.0.0.1", 00:42:27.425 "adrfam": "ipv4", 00:42:27.425 "trsvcid": "4420", 00:42:27.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:27.425 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:42:27.425 "prchk_reftag": false, 00:42:27.425 "prchk_guard": false, 00:42:27.425 "hdgst": false, 00:42:27.425 "ddgst": false, 00:42:27.425 "psk": "key0", 00:42:27.425 "allow_unrecognized_csi": false, 00:42:27.425 "method": "bdev_nvme_attach_controller", 00:42:27.425 "req_id": 1 00:42:27.425 } 00:42:27.425 Got JSON-RPC error response 00:42:27.425 response: 00:42:27.425 { 00:42:27.425 "code": -19, 00:42:27.425 "message": "No such device" 00:42:27.425 } 00:42:27.425 13:53:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:27.425 13:53:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:27.425 13:53:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:27.425 13:53:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:27.425 13:53:19 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:27.425 13:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:27.683 13:53:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:27.683 13:53:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:27.683 13:53:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:27.683 13:53:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:27.683 13:53:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:27.683 13:53:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:27.683 13:53:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LmeXGsGGXM 00:42:27.683 13:53:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:27.683 13:53:19 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:27.683 13:53:19 keyring_file -- 
nvmf/common.sh@728 -- # local prefix key digest 00:42:27.683 13:53:19 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:27.683 13:53:19 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:27.683 13:53:19 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:27.683 13:53:19 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:27.940 13:53:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LmeXGsGGXM 00:42:27.940 13:53:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LmeXGsGGXM 00:42:27.940 13:53:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.LmeXGsGGXM 00:42:27.940 13:53:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LmeXGsGGXM 00:42:27.940 13:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LmeXGsGGXM 00:42:28.198 13:53:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:28.199 13:53:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:28.456 nvme0n1 00:42:28.456 13:53:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:28.456 13:53:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:28.456 13:53:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:28.456 13:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:28.456 13:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:28.456 13:53:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.715 13:53:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:28.715 13:53:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:28.715 13:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:28.972 13:53:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:28.972 13:53:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:28.972 13:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:28.972 13:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.972 13:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:29.230 13:53:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:29.230 13:53:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:29.230 13:53:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:29.230 13:53:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:29.230 13:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:29.230 13:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:29.230 13:53:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.488 13:53:21 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:29.488 13:53:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:29.488 13:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:42:29.745 13:53:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:29.745 13:53:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:29.745 13:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:30.003 13:53:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:30.003 13:53:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LmeXGsGGXM 00:42:30.003 13:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LmeXGsGGXM 00:42:30.261 13:53:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UKutkzjBYs 00:42:30.261 13:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UKutkzjBYs 00:42:30.519 13:53:22 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:30.519 13:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:31.085 nvme0n1 00:42:31.085 13:53:22 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:31.085 13:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:31.343 13:53:23 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:31.343 "subsystems": [ 00:42:31.343 { 00:42:31.343 "subsystem": 
"keyring", 00:42:31.343 "config": [ 00:42:31.343 { 00:42:31.343 "method": "keyring_file_add_key", 00:42:31.343 "params": { 00:42:31.343 "name": "key0", 00:42:31.343 "path": "/tmp/tmp.LmeXGsGGXM" 00:42:31.343 } 00:42:31.343 }, 00:42:31.343 { 00:42:31.343 "method": "keyring_file_add_key", 00:42:31.343 "params": { 00:42:31.343 "name": "key1", 00:42:31.343 "path": "/tmp/tmp.UKutkzjBYs" 00:42:31.343 } 00:42:31.343 } 00:42:31.343 ] 00:42:31.343 }, 00:42:31.343 { 00:42:31.343 "subsystem": "iobuf", 00:42:31.343 "config": [ 00:42:31.343 { 00:42:31.343 "method": "iobuf_set_options", 00:42:31.343 "params": { 00:42:31.343 "small_pool_count": 8192, 00:42:31.343 "large_pool_count": 1024, 00:42:31.343 "small_bufsize": 8192, 00:42:31.343 "large_bufsize": 135168 00:42:31.343 } 00:42:31.343 } 00:42:31.343 ] 00:42:31.343 }, 00:42:31.343 { 00:42:31.343 "subsystem": "sock", 00:42:31.343 "config": [ 00:42:31.343 { 00:42:31.343 "method": "sock_set_default_impl", 00:42:31.343 "params": { 00:42:31.343 "impl_name": "posix" 00:42:31.343 } 00:42:31.343 }, 00:42:31.343 { 00:42:31.343 "method": "sock_impl_set_options", 00:42:31.343 "params": { 00:42:31.343 "impl_name": "ssl", 00:42:31.343 "recv_buf_size": 4096, 00:42:31.343 "send_buf_size": 4096, 00:42:31.343 "enable_recv_pipe": true, 00:42:31.343 "enable_quickack": false, 00:42:31.343 "enable_placement_id": 0, 00:42:31.343 "enable_zerocopy_send_server": true, 00:42:31.343 "enable_zerocopy_send_client": false, 00:42:31.343 "zerocopy_threshold": 0, 00:42:31.343 "tls_version": 0, 00:42:31.343 "enable_ktls": false 00:42:31.343 } 00:42:31.343 }, 00:42:31.343 { 00:42:31.343 "method": "sock_impl_set_options", 00:42:31.343 "params": { 00:42:31.343 "impl_name": "posix", 00:42:31.343 "recv_buf_size": 2097152, 00:42:31.343 "send_buf_size": 2097152, 00:42:31.343 "enable_recv_pipe": true, 00:42:31.343 "enable_quickack": false, 00:42:31.343 "enable_placement_id": 0, 00:42:31.343 "enable_zerocopy_send_server": true, 00:42:31.343 
"enable_zerocopy_send_client": false, 00:42:31.343 "zerocopy_threshold": 0, 00:42:31.343 "tls_version": 0, 00:42:31.343 "enable_ktls": false 00:42:31.343 } 00:42:31.343 } 00:42:31.343 ] 00:42:31.343 }, 00:42:31.343 { 00:42:31.343 "subsystem": "vmd", 00:42:31.343 "config": [] 00:42:31.343 }, 00:42:31.343 { 00:42:31.343 "subsystem": "accel", 00:42:31.343 "config": [ 00:42:31.343 { 00:42:31.343 "method": "accel_set_options", 00:42:31.343 "params": { 00:42:31.343 "small_cache_size": 128, 00:42:31.343 "large_cache_size": 16, 00:42:31.344 "task_count": 2048, 00:42:31.344 "sequence_count": 2048, 00:42:31.344 "buf_count": 2048 00:42:31.344 } 00:42:31.344 } 00:42:31.344 ] 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 "subsystem": "bdev", 00:42:31.344 "config": [ 00:42:31.344 { 00:42:31.344 "method": "bdev_set_options", 00:42:31.344 "params": { 00:42:31.344 "bdev_io_pool_size": 65535, 00:42:31.344 "bdev_io_cache_size": 256, 00:42:31.344 "bdev_auto_examine": true, 00:42:31.344 "iobuf_small_cache_size": 128, 00:42:31.344 "iobuf_large_cache_size": 16 00:42:31.344 } 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 "method": "bdev_raid_set_options", 00:42:31.344 "params": { 00:42:31.344 "process_window_size_kb": 1024, 00:42:31.344 "process_max_bandwidth_mb_sec": 0 00:42:31.344 } 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 "method": "bdev_iscsi_set_options", 00:42:31.344 "params": { 00:42:31.344 "timeout_sec": 30 00:42:31.344 } 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 "method": "bdev_nvme_set_options", 00:42:31.344 "params": { 00:42:31.344 "action_on_timeout": "none", 00:42:31.344 "timeout_us": 0, 00:42:31.344 "timeout_admin_us": 0, 00:42:31.344 "keep_alive_timeout_ms": 10000, 00:42:31.344 "arbitration_burst": 0, 00:42:31.344 "low_priority_weight": 0, 00:42:31.344 "medium_priority_weight": 0, 00:42:31.344 "high_priority_weight": 0, 00:42:31.344 "nvme_adminq_poll_period_us": 10000, 00:42:31.344 "nvme_ioq_poll_period_us": 0, 00:42:31.344 "io_queue_requests": 512, 00:42:31.344 
"delay_cmd_submit": true, 00:42:31.344 "transport_retry_count": 4, 00:42:31.344 "bdev_retry_count": 3, 00:42:31.344 "transport_ack_timeout": 0, 00:42:31.344 "ctrlr_loss_timeout_sec": 0, 00:42:31.344 "reconnect_delay_sec": 0, 00:42:31.344 "fast_io_fail_timeout_sec": 0, 00:42:31.344 "disable_auto_failback": false, 00:42:31.344 "generate_uuids": false, 00:42:31.344 "transport_tos": 0, 00:42:31.344 "nvme_error_stat": false, 00:42:31.344 "rdma_srq_size": 0, 00:42:31.344 "io_path_stat": false, 00:42:31.344 "allow_accel_sequence": false, 00:42:31.344 "rdma_max_cq_size": 0, 00:42:31.344 "rdma_cm_event_timeout_ms": 0, 00:42:31.344 "dhchap_digests": [ 00:42:31.344 "sha256", 00:42:31.344 "sha384", 00:42:31.344 "sha512" 00:42:31.344 ], 00:42:31.344 "dhchap_dhgroups": [ 00:42:31.344 "null", 00:42:31.344 "ffdhe2048", 00:42:31.344 "ffdhe3072", 00:42:31.344 "ffdhe4096", 00:42:31.344 "ffdhe6144", 00:42:31.344 "ffdhe8192" 00:42:31.344 ] 00:42:31.344 } 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 "method": "bdev_nvme_attach_controller", 00:42:31.344 "params": { 00:42:31.344 "name": "nvme0", 00:42:31.344 "trtype": "TCP", 00:42:31.344 "adrfam": "IPv4", 00:42:31.344 "traddr": "127.0.0.1", 00:42:31.344 "trsvcid": "4420", 00:42:31.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:31.344 "prchk_reftag": false, 00:42:31.344 "prchk_guard": false, 00:42:31.344 "ctrlr_loss_timeout_sec": 0, 00:42:31.344 "reconnect_delay_sec": 0, 00:42:31.344 "fast_io_fail_timeout_sec": 0, 00:42:31.344 "psk": "key0", 00:42:31.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:31.344 "hdgst": false, 00:42:31.344 "ddgst": false, 00:42:31.344 "multipath": "multipath" 00:42:31.344 } 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 "method": "bdev_nvme_set_hotplug", 00:42:31.344 "params": { 00:42:31.344 "period_us": 100000, 00:42:31.344 "enable": false 00:42:31.344 } 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 "method": "bdev_wait_for_examine" 00:42:31.344 } 00:42:31.344 ] 00:42:31.344 }, 00:42:31.344 { 00:42:31.344 
"subsystem": "nbd", 00:42:31.344 "config": [] 00:42:31.344 } 00:42:31.344 ] 00:42:31.344 }' 00:42:31.344 13:53:23 keyring_file -- keyring/file.sh@115 -- # killprocess 478063 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 478063 ']' 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@954 -- # kill -0 478063 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 478063 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 478063' 00:42:31.344 killing process with pid 478063 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@969 -- # kill 478063 00:42:31.344 Received shutdown signal, test time was about 1.000000 seconds 00:42:31.344 00:42:31.344 Latency(us) 00:42:31.344 [2024-10-14T11:53:23.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:31.344 [2024-10-14T11:53:23.197Z] =================================================================================================================== 00:42:31.344 [2024-10-14T11:53:23.197Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:31.344 13:53:23 keyring_file -- common/autotest_common.sh@974 -- # wait 478063 00:42:31.603 13:53:23 keyring_file -- keyring/file.sh@118 -- # bperfpid=479529 00:42:31.603 13:53:23 keyring_file -- keyring/file.sh@120 -- # waitforlisten 479529 /var/tmp/bperf.sock 00:42:31.603 13:53:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 479529 ']' 00:42:31.603 13:53:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 
00:42:31.603 13:53:23 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:31.603 13:53:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:31.603 13:53:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:31.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:31.603 13:53:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:31.603 13:53:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:31.603 13:53:23 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:31.603 "subsystems": [ 00:42:31.603 { 00:42:31.603 "subsystem": "keyring", 00:42:31.603 "config": [ 00:42:31.603 { 00:42:31.603 "method": "keyring_file_add_key", 00:42:31.603 "params": { 00:42:31.603 "name": "key0", 00:42:31.603 "path": "/tmp/tmp.LmeXGsGGXM" 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "keyring_file_add_key", 00:42:31.603 "params": { 00:42:31.603 "name": "key1", 00:42:31.603 "path": "/tmp/tmp.UKutkzjBYs" 00:42:31.603 } 00:42:31.603 } 00:42:31.603 ] 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "subsystem": "iobuf", 00:42:31.603 "config": [ 00:42:31.603 { 00:42:31.603 "method": "iobuf_set_options", 00:42:31.603 "params": { 00:42:31.603 "small_pool_count": 8192, 00:42:31.603 "large_pool_count": 1024, 00:42:31.603 "small_bufsize": 8192, 00:42:31.603 "large_bufsize": 135168 00:42:31.603 } 00:42:31.603 } 00:42:31.603 ] 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "subsystem": "sock", 00:42:31.603 "config": [ 00:42:31.603 { 00:42:31.603 "method": "sock_set_default_impl", 00:42:31.603 "params": { 00:42:31.603 "impl_name": "posix" 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "sock_impl_set_options", 
00:42:31.603 "params": { 00:42:31.603 "impl_name": "ssl", 00:42:31.603 "recv_buf_size": 4096, 00:42:31.603 "send_buf_size": 4096, 00:42:31.603 "enable_recv_pipe": true, 00:42:31.603 "enable_quickack": false, 00:42:31.603 "enable_placement_id": 0, 00:42:31.603 "enable_zerocopy_send_server": true, 00:42:31.603 "enable_zerocopy_send_client": false, 00:42:31.603 "zerocopy_threshold": 0, 00:42:31.603 "tls_version": 0, 00:42:31.603 "enable_ktls": false 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "sock_impl_set_options", 00:42:31.603 "params": { 00:42:31.603 "impl_name": "posix", 00:42:31.603 "recv_buf_size": 2097152, 00:42:31.603 "send_buf_size": 2097152, 00:42:31.603 "enable_recv_pipe": true, 00:42:31.603 "enable_quickack": false, 00:42:31.603 "enable_placement_id": 0, 00:42:31.603 "enable_zerocopy_send_server": true, 00:42:31.603 "enable_zerocopy_send_client": false, 00:42:31.603 "zerocopy_threshold": 0, 00:42:31.603 "tls_version": 0, 00:42:31.603 "enable_ktls": false 00:42:31.603 } 00:42:31.603 } 00:42:31.603 ] 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "subsystem": "vmd", 00:42:31.603 "config": [] 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "subsystem": "accel", 00:42:31.603 "config": [ 00:42:31.603 { 00:42:31.603 "method": "accel_set_options", 00:42:31.603 "params": { 00:42:31.603 "small_cache_size": 128, 00:42:31.603 "large_cache_size": 16, 00:42:31.603 "task_count": 2048, 00:42:31.603 "sequence_count": 2048, 00:42:31.603 "buf_count": 2048 00:42:31.603 } 00:42:31.603 } 00:42:31.603 ] 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "subsystem": "bdev", 00:42:31.603 "config": [ 00:42:31.603 { 00:42:31.603 "method": "bdev_set_options", 00:42:31.603 "params": { 00:42:31.603 "bdev_io_pool_size": 65535, 00:42:31.603 "bdev_io_cache_size": 256, 00:42:31.603 "bdev_auto_examine": true, 00:42:31.603 "iobuf_small_cache_size": 128, 00:42:31.603 "iobuf_large_cache_size": 16 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": 
"bdev_raid_set_options", 00:42:31.603 "params": { 00:42:31.603 "process_window_size_kb": 1024, 00:42:31.603 "process_max_bandwidth_mb_sec": 0 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "bdev_iscsi_set_options", 00:42:31.603 "params": { 00:42:31.603 "timeout_sec": 30 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "bdev_nvme_set_options", 00:42:31.603 "params": { 00:42:31.603 "action_on_timeout": "none", 00:42:31.603 "timeout_us": 0, 00:42:31.603 "timeout_admin_us": 0, 00:42:31.603 "keep_alive_timeout_ms": 10000, 00:42:31.603 "arbitration_burst": 0, 00:42:31.603 "low_priority_weight": 0, 00:42:31.603 "medium_priority_weight": 0, 00:42:31.603 "high_priority_weight": 0, 00:42:31.603 "nvme_adminq_poll_period_us": 10000, 00:42:31.603 "nvme_ioq_poll_period_us": 0, 00:42:31.603 "io_queue_requests": 512, 00:42:31.603 "delay_cmd_submit": true, 00:42:31.603 "transport_retry_count": 4, 00:42:31.603 "bdev_retry_count": 3, 00:42:31.603 "transport_ack_timeout": 0, 00:42:31.603 "ctrlr_loss_timeout_sec": 0, 00:42:31.603 "reconnect_delay_sec": 0, 00:42:31.603 "fast_io_fail_timeout_sec": 0, 00:42:31.603 "disable_auto_failback": false, 00:42:31.603 "generate_uuids": false, 00:42:31.603 "transport_tos": 0, 00:42:31.603 "nvme_error_stat": false, 00:42:31.603 "rdma_srq_size": 0, 00:42:31.603 "io_path_stat": false, 00:42:31.603 "allow_accel_sequence": false, 00:42:31.603 "rdma_max_cq_size": 0, 00:42:31.603 "rdma_cm_event_timeout_ms": 0, 00:42:31.603 "dhchap_digests": [ 00:42:31.603 "sha256", 00:42:31.603 "sha384", 00:42:31.603 "sha512" 00:42:31.603 ], 00:42:31.603 "dhchap_dhgroups": [ 00:42:31.603 "null", 00:42:31.603 "ffdhe2048", 00:42:31.603 "ffdhe3072", 00:42:31.603 "ffdhe4096", 00:42:31.603 "ffdhe6144", 00:42:31.603 "ffdhe8192" 00:42:31.603 ] 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "bdev_nvme_attach_controller", 00:42:31.603 "params": { 00:42:31.603 "name": "nvme0", 00:42:31.603 "trtype": "TCP", 00:42:31.603 
"adrfam": "IPv4", 00:42:31.603 "traddr": "127.0.0.1", 00:42:31.603 "trsvcid": "4420", 00:42:31.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:31.603 "prchk_reftag": false, 00:42:31.603 "prchk_guard": false, 00:42:31.603 "ctrlr_loss_timeout_sec": 0, 00:42:31.603 "reconnect_delay_sec": 0, 00:42:31.603 "fast_io_fail_timeout_sec": 0, 00:42:31.603 "psk": "key0", 00:42:31.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:31.603 "hdgst": false, 00:42:31.603 "ddgst": false, 00:42:31.603 "multipath": "multipath" 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "bdev_nvme_set_hotplug", 00:42:31.603 "params": { 00:42:31.603 "period_us": 100000, 00:42:31.603 "enable": false 00:42:31.603 } 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "method": "bdev_wait_for_examine" 00:42:31.603 } 00:42:31.603 ] 00:42:31.603 }, 00:42:31.603 { 00:42:31.603 "subsystem": "nbd", 00:42:31.603 "config": [] 00:42:31.603 } 00:42:31.603 ] 00:42:31.603 }' 00:42:31.603 [2024-10-14 13:53:23.256284] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
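(Editor's aside, not part of the recorded trace: the `bperf_cmd` invocations below all shell out to `scripts/rpc.py -s /var/tmp/bperf.sock <method>`, i.e. a JSON-RPC 2.0 call over a Unix-domain socket to the bdevperf process. A minimal sketch of that request shape follows; the socket path and method name are taken from the trace, while the exact framing and `id` handling are assumptions, not a copy of `rpc.py`.)

```python
import json
import socket


def build_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request dict like the ones scripts/rpc.py sends.

    The 'params' key is only present when parameters are supplied,
    matching calls such as 'keyring_get_keys' which take none.
    """
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return req


def spdk_rpc(sock_path, method, params=None):
    """Send one request over a Unix socket and read until a full JSON
    response parses. Framing here is an assumption for illustration."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(build_request(method, params)).encode())
        buf = b""
        while True:
            buf += s.recv(4096)
            try:
                return json.loads(buf)["result"]
            except ValueError:
                continue  # response not complete yet, keep reading


if __name__ == "__main__":
    # e.g. spdk_rpc("/var/tmp/bperf.sock", "keyring_get_keys")
    print(build_request("keyring_get_keys"))
```

The test helpers in `keyring/common.sh` then pipe the JSON result through `jq` (e.g. `jq length`, `jq -r .refcnt`) to extract the fields being asserted on.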
00:42:31.603 [2024-10-14 13:53:23.256362] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479529 ] 00:42:31.603 [2024-10-14 13:53:23.316243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.603 [2024-10-14 13:53:23.365950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:31.861 [2024-10-14 13:53:23.548643] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:31.861 13:53:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:31.861 13:53:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:31.861 13:53:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:31.861 13:53:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:31.861 13:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.120 13:53:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:32.120 13:53:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:32.120 13:53:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:32.120 13:53:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:32.120 13:53:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.120 13:53:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:32.120 13:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.378 13:53:24 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:32.378 13:53:24 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:32.378 13:53:24 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:32.378 13:53:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:32.378 13:53:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.378 13:53:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:32.378 13:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.636 13:53:24 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:32.636 13:53:24 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:32.636 13:53:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:32.636 13:53:24 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:32.894 13:53:24 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:32.894 13:53:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:32.894 13:53:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LmeXGsGGXM /tmp/tmp.UKutkzjBYs 00:42:32.894 13:53:24 keyring_file -- keyring/file.sh@20 -- # killprocess 479529 00:42:32.894 13:53:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 479529 ']' 00:42:32.894 13:53:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 479529 00:42:32.894 13:53:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:32.894 13:53:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:32.894 13:53:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479529 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 479529' 00:42:33.152 killing process with pid 479529 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@969 -- # kill 479529 00:42:33.152 Received shutdown signal, test time was about 1.000000 seconds 00:42:33.152 00:42:33.152 Latency(us) 00:42:33.152 [2024-10-14T11:53:25.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:33.152 [2024-10-14T11:53:25.005Z] =================================================================================================================== 00:42:33.152 [2024-10-14T11:53:25.005Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@974 -- # wait 479529 00:42:33.152 13:53:24 keyring_file -- keyring/file.sh@21 -- # killprocess 478058 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 478058 ']' 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 478058 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 478058 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 478058' 00:42:33.152 killing process with pid 478058 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@969 -- # kill 478058 00:42:33.152 13:53:24 keyring_file -- common/autotest_common.sh@974 -- # wait 478058 00:42:33.720 00:42:33.720 real 0m14.268s 00:42:33.720 user 0m36.529s 00:42:33.720 sys 0m3.175s 00:42:33.720 13:53:25 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:33.720 13:53:25 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:33.720 ************************************ 00:42:33.720 END TEST keyring_file 00:42:33.720 ************************************ 00:42:33.720 13:53:25 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:33.720 13:53:25 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:33.720 13:53:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:33.720 13:53:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:33.720 13:53:25 -- common/autotest_common.sh@10 -- # set +x 00:42:33.720 ************************************ 00:42:33.720 START TEST keyring_linux 00:42:33.720 ************************************ 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:33.720 Joined session keyring: 564060257 00:42:33.720 * Looking for test storage... 
00:42:33.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.720 --rc genhtml_branch_coverage=1 00:42:33.720 --rc genhtml_function_coverage=1 00:42:33.720 --rc genhtml_legend=1 00:42:33.720 --rc geninfo_all_blocks=1 00:42:33.720 --rc geninfo_unexecuted_blocks=1 00:42:33.720 00:42:33.720 ' 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.720 --rc genhtml_branch_coverage=1 00:42:33.720 --rc genhtml_function_coverage=1 00:42:33.720 --rc genhtml_legend=1 00:42:33.720 --rc geninfo_all_blocks=1 00:42:33.720 --rc geninfo_unexecuted_blocks=1 00:42:33.720 00:42:33.720 ' 
00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.720 --rc genhtml_branch_coverage=1 00:42:33.720 --rc genhtml_function_coverage=1 00:42:33.720 --rc genhtml_legend=1 00:42:33.720 --rc geninfo_all_blocks=1 00:42:33.720 --rc geninfo_unexecuted_blocks=1 00:42:33.720 00:42:33.720 ' 00:42:33.720 13:53:25 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:33.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.720 --rc genhtml_branch_coverage=1 00:42:33.720 --rc genhtml_function_coverage=1 00:42:33.720 --rc genhtml_legend=1 00:42:33.720 --rc geninfo_all_blocks=1 00:42:33.720 --rc geninfo_unexecuted_blocks=1 00:42:33.720 00:42:33.720 ' 00:42:33.720 13:53:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:33.720 13:53:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:33.720 13:53:25 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:33.720 13:53:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.720 13:53:25 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.720 13:53:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.720 13:53:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:33.720 13:53:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:33.720 13:53:25 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:33.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:33.721 13:53:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:33.721 13:53:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:33.721 13:53:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:33.721 13:53:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:33.721 13:53:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:33.721 13:53:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:33.721 /tmp/:spdk-test:key0 00:42:33.721 13:53:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:33.721 13:53:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:33.721 13:53:25 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:33.979 13:53:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:33.979 13:53:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:33.979 /tmp/:spdk-test:key1 00:42:33.979 13:53:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=479896 00:42:33.979 13:53:25 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:33.979 13:53:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 479896 00:42:33.979 13:53:25 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 479896 ']' 00:42:33.979 13:53:25 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:33.979 13:53:25 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:33.979 13:53:25 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:33.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:33.979 13:53:25 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:33.979 13:53:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:33.979 [2024-10-14 13:53:25.649854] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
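(Editor's aside, not part of the recorded trace: the `keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:...` lines below load PSKs in the NVMe TLS interchange format, produced earlier by the inline `python -` heredoc behind `format_interchange_psk`. A sketch of that encoding follows; treating the configured key as an ASCII string and appending a little-endian CRC-32 before base64-encoding are assumptions read from the surrounding shell trace, not a verified copy of the heredoc.)

```python
import base64
import struct
import zlib


def format_interchange_psk(configured_psk, hash_id=0):
    """Wrap a configured PSK as 'NVMeTLSkey-1:<hh>:<base64(psk || crc32_le(psk))>:'.

    hash_id 0 ("00") means no PSK hash is specified, matching the
    'NVMeTLSkey-1:00:' prefixes visible in the trace below.
    """
    raw = configured_psk.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(raw) & 0xFFFFFFFF)
    body = base64.b64encode(raw + crc).decode("ascii")
    return "NVMeTLSkey-1:%02x:%s:" % (hash_id, body)


if __name__ == "__main__":
    # key0 from the trace: 00112233445566778899aabbccddeeff
    print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

The resulting string is what gets written to `/tmp/:spdk-test:key0` (chmod 0600) and loaded into the session keyring with `keyctl add user ... @s`.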
00:42:33.979 [2024-10-14 13:53:25.649944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479896 ] 00:42:33.979 [2024-10-14 13:53:25.710699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.979 [2024-10-14 13:53:25.755598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.237 13:53:26 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:34.238 13:53:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:34.238 [2024-10-14 13:53:26.014642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:34.238 null0 00:42:34.238 [2024-10-14 13:53:26.046699] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:34.238 [2024-10-14 13:53:26.047214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:34.238 13:53:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:34.238 374047764 00:42:34.238 13:53:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:34.238 619427956 00:42:34.238 13:53:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=480024 00:42:34.238 13:53:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:34.238 13:53:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 480024 /var/tmp/bperf.sock 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 480024 ']' 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:34.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:34.238 13:53:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:34.496 [2024-10-14 13:53:26.115046] Starting SPDK v25.01-pre git sha1 b6849ff47 / DPDK 23.11.0 initialization... 
00:42:34.496 [2024-10-14 13:53:26.115124] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480024 ] 00:42:34.496 [2024-10-14 13:53:26.172473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:34.496 [2024-10-14 13:53:26.217076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:34.496 13:53:26 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:34.496 13:53:26 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:34.496 13:53:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:34.496 13:53:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:34.754 13:53:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:34.754 13:53:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:35.320 13:53:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:35.321 13:53:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:35.578 [2024-10-14 13:53:27.188793] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:35.578 nvme0n1 00:42:35.578 13:53:27 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:42:35.578 13:53:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:35.578 13:53:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:35.578 13:53:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:35.578 13:53:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:35.578 13:53:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.836 13:53:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:35.836 13:53:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:35.836 13:53:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:35.836 13:53:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:35.836 13:53:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.836 13:53:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.836 13:53:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:36.094 13:53:27 keyring_linux -- keyring/linux.sh@25 -- # sn=374047764 00:42:36.094 13:53:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:36.094 13:53:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:36.094 13:53:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 374047764 == \3\7\4\0\4\7\7\6\4 ]] 00:42:36.094 13:53:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 374047764 00:42:36.094 13:53:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:36.094 13:53:27 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:36.094 Running I/O for 1 seconds... 00:42:37.473 11388.00 IOPS, 44.48 MiB/s 00:42:37.474 Latency(us) 00:42:37.474 [2024-10-14T11:53:29.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.474 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:37.474 nvme0n1 : 1.01 11397.52 44.52 0.00 0.00 11164.28 8932.31 21748.24 00:42:37.474 [2024-10-14T11:53:29.327Z] =================================================================================================================== 00:42:37.474 [2024-10-14T11:53:29.327Z] Total : 11397.52 44.52 0.00 0.00 11164.28 8932.31 21748.24 00:42:37.474 { 00:42:37.474 "results": [ 00:42:37.474 { 00:42:37.474 "job": "nvme0n1", 00:42:37.474 "core_mask": "0x2", 00:42:37.474 "workload": "randread", 00:42:37.474 "status": "finished", 00:42:37.474 "queue_depth": 128, 00:42:37.474 "io_size": 4096, 00:42:37.474 "runtime": 1.010483, 00:42:37.474 "iops": 11397.519799937258, 00:42:37.474 "mibps": 44.52156171850491, 00:42:37.474 "io_failed": 0, 00:42:37.474 "io_timeout": 0, 00:42:37.474 "avg_latency_us": 11164.279808978032, 00:42:37.474 "min_latency_us": 8932.314074074075, 00:42:37.474 "max_latency_us": 21748.242962962962 00:42:37.474 } 00:42:37.474 ], 00:42:37.474 "core_count": 1 00:42:37.474 } 00:42:37.474 13:53:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:37.474 13:53:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:37.474 13:53:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:37.474 13:53:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:37.474 13:53:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:37.474 13:53:29 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:37.474 13:53:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:37.474 13:53:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.731 13:53:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:37.731 13:53:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:37.731 13:53:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:37.731 13:53:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:37.731 13:53:29 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:37.731 13:53:29 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:37.731 13:53:29 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:37.731 13:53:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:37.731 13:53:29 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:37.731 13:53:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:37.732 13:53:29 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:37.732 13:53:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:37.990 [2024-10-14 13:53:29.760878] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:37.990 [2024-10-14 13:53:29.761703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1cf0 (107): Transport endpoint is not connected 00:42:37.990 [2024-10-14 13:53:29.762695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1cf0 (9): Bad file descriptor 00:42:37.990 [2024-10-14 13:53:29.763695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:37.990 [2024-10-14 13:53:29.763712] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:37.990 [2024-10-14 13:53:29.763741] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:37.990 [2024-10-14 13:53:29.763755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:37.990 request: 00:42:37.990 { 00:42:37.990 "name": "nvme0", 00:42:37.990 "trtype": "tcp", 00:42:37.990 "traddr": "127.0.0.1", 00:42:37.990 "adrfam": "ipv4", 00:42:37.990 "trsvcid": "4420", 00:42:37.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:37.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:37.990 "prchk_reftag": false, 00:42:37.990 "prchk_guard": false, 00:42:37.990 "hdgst": false, 00:42:37.990 "ddgst": false, 00:42:37.990 "psk": ":spdk-test:key1", 00:42:37.990 "allow_unrecognized_csi": false, 00:42:37.990 "method": "bdev_nvme_attach_controller", 00:42:37.990 "req_id": 1 00:42:37.990 } 00:42:37.990 Got JSON-RPC error response 00:42:37.990 response: 00:42:37.990 { 00:42:37.990 "code": -5, 00:42:37.990 "message": "Input/output error" 00:42:37.990 } 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@33 -- # sn=374047764 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 374047764 00:42:37.990 1 links removed 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:37.990 
13:53:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@33 -- # sn=619427956 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 619427956 00:42:37.990 1 links removed 00:42:37.990 13:53:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 480024 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 480024 ']' 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 480024 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480024 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480024' 00:42:37.990 killing process with pid 480024 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 480024 00:42:37.990 Received shutdown signal, test time was about 1.000000 seconds 00:42:37.990 00:42:37.990 Latency(us) 00:42:37.990 [2024-10-14T11:53:29.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.990 [2024-10-14T11:53:29.843Z] =================================================================================================================== 00:42:37.990 [2024-10-14T11:53:29.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:37.990 13:53:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 480024 
00:42:38.248 13:53:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 479896 00:42:38.248 13:53:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 479896 ']' 00:42:38.248 13:53:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 479896 00:42:38.248 13:53:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:38.248 13:53:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:38.248 13:53:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479896 00:42:38.248 13:53:30 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:38.248 13:53:30 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:38.248 13:53:30 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 479896' 00:42:38.248 killing process with pid 479896 00:42:38.248 13:53:30 keyring_linux -- common/autotest_common.sh@969 -- # kill 479896 00:42:38.248 13:53:30 keyring_linux -- common/autotest_common.sh@974 -- # wait 479896 00:42:38.813 00:42:38.813 real 0m5.029s 00:42:38.813 user 0m10.000s 00:42:38.813 sys 0m1.632s 00:42:38.813 13:53:30 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:38.813 13:53:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:38.813 ************************************ 00:42:38.813 END TEST keyring_linux 00:42:38.813 ************************************ 00:42:38.813 13:53:30 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@342 -- # '[' 0 
-eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:38.813 13:53:30 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:38.813 13:53:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:38.813 13:53:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:38.813 13:53:30 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:38.813 13:53:30 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:38.813 13:53:30 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:38.813 13:53:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:38.813 13:53:30 -- common/autotest_common.sh@10 -- # set +x 00:42:38.813 13:53:30 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:38.813 13:53:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:38.813 13:53:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:38.813 13:53:30 -- common/autotest_common.sh@10 -- # set +x 00:42:40.712 INFO: APP EXITING 00:42:40.712 INFO: killing all VMs 00:42:40.712 INFO: killing vhost app 00:42:40.712 INFO: EXIT DONE 00:42:41.650 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:41.650 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:41.650 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:41.650 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:41.650 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:41.650 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:41.650 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:41.650 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:41.650 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:41.650 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:41.650 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:41.908 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:41.908 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:41.908 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:41.908 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:41.908 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:41.908 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:43.282 Cleaning 00:42:43.282 Removing: /var/run/dpdk/spdk0/config 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:43.282 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:43.282 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:43.282 Removing: /var/run/dpdk/spdk1/config 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:43.282 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:43.282 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:43.282 Removing: /var/run/dpdk/spdk2/config 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:43.282 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:43.282 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:43.282 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:43.282 Removing: /var/run/dpdk/spdk3/config 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:43.282 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:43.282 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:43.282 Removing: /var/run/dpdk/spdk4/config 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:43.282 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:43.283 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:43.283 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:42:43.283 Removing: /dev/shm/bdev_svc_trace.1 00:42:43.283 Removing: /dev/shm/nvmf_trace.0 00:42:43.283 Removing: /dev/shm/spdk_tgt_trace.pid99798 00:42:43.283 Removing: /var/run/dpdk/spdk0 00:42:43.283 Removing: /var/run/dpdk/spdk1 00:42:43.283 Removing: /var/run/dpdk/spdk2 00:42:43.283 Removing: /var/run/dpdk/spdk3 00:42:43.283 Removing: /var/run/dpdk/spdk4 00:42:43.283 Removing: /var/run/dpdk/spdk_pid100130 00:42:43.283 Removing: /var/run/dpdk/spdk_pid100817 00:42:43.283 Removing: /var/run/dpdk/spdk_pid100957 00:42:43.283 Removing: /var/run/dpdk/spdk_pid101675 00:42:43.283 Removing: /var/run/dpdk/spdk_pid101681 00:42:43.283 Removing: /var/run/dpdk/spdk_pid101939 00:42:43.283 Removing: /var/run/dpdk/spdk_pid103264 00:42:43.283 Removing: /var/run/dpdk/spdk_pid104177 00:42:43.283 Removing: /var/run/dpdk/spdk_pid104383 00:42:43.283 Removing: /var/run/dpdk/spdk_pid104693 00:42:43.283 Removing: /var/run/dpdk/spdk_pid104907 00:42:43.283 Removing: /var/run/dpdk/spdk_pid105105 00:42:43.283 Removing: /var/run/dpdk/spdk_pid105260 00:42:43.283 Removing: /var/run/dpdk/spdk_pid105412 00:42:43.283 Removing: /var/run/dpdk/spdk_pid105612 00:42:43.283 Removing: /var/run/dpdk/spdk_pid105916 00:42:43.283 Removing: /var/run/dpdk/spdk_pid108295 00:42:43.283 Removing: /var/run/dpdk/spdk_pid108472 00:42:43.283 Removing: /var/run/dpdk/spdk_pid108619 00:42:43.283 Removing: /var/run/dpdk/spdk_pid108742 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109045 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109051 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109478 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109483 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109651 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109781 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109945 00:42:43.283 Removing: /var/run/dpdk/spdk_pid109957 00:42:43.283 Removing: /var/run/dpdk/spdk_pid110388 00:42:43.283 Removing: /var/run/dpdk/spdk_pid110608 00:42:43.283 Removing: /var/run/dpdk/spdk_pid110916 00:42:43.283 Removing: 
/var/run/dpdk/spdk_pid113437 00:42:43.283 Removing: /var/run/dpdk/spdk_pid116181 00:42:43.283 Removing: /var/run/dpdk/spdk_pid123201 00:42:43.283 Removing: /var/run/dpdk/spdk_pid123609 00:42:43.283 Removing: /var/run/dpdk/spdk_pid126139 00:42:43.283 Removing: /var/run/dpdk/spdk_pid126412 00:42:43.283 Removing: /var/run/dpdk/spdk_pid128937 00:42:43.283 Removing: /var/run/dpdk/spdk_pid132665 00:42:43.283 Removing: /var/run/dpdk/spdk_pid134852 00:42:43.283 Removing: /var/run/dpdk/spdk_pid141271 00:42:43.283 Removing: /var/run/dpdk/spdk_pid146559 00:42:43.283 Removing: /var/run/dpdk/spdk_pid147945 00:42:43.283 Removing: /var/run/dpdk/spdk_pid149147 00:42:43.283 Removing: /var/run/dpdk/spdk_pid159427 00:42:43.283 Removing: /var/run/dpdk/spdk_pid161708 00:42:43.283 Removing: /var/run/dpdk/spdk_pid217038 00:42:43.283 Removing: /var/run/dpdk/spdk_pid220265 00:42:43.283 Removing: /var/run/dpdk/spdk_pid224042 00:42:43.283 Removing: /var/run/dpdk/spdk_pid227902 00:42:43.283 Removing: /var/run/dpdk/spdk_pid227904 00:42:43.283 Removing: /var/run/dpdk/spdk_pid228564 00:42:43.283 Removing: /var/run/dpdk/spdk_pid229105 00:42:43.283 Removing: /var/run/dpdk/spdk_pid229768 00:42:43.283 Removing: /var/run/dpdk/spdk_pid230168 00:42:43.283 Removing: /var/run/dpdk/spdk_pid230170 00:42:43.283 Removing: /var/run/dpdk/spdk_pid230432 00:42:43.283 Removing: /var/run/dpdk/spdk_pid230566 00:42:43.283 Removing: /var/run/dpdk/spdk_pid230573 00:42:43.283 Removing: /var/run/dpdk/spdk_pid231227 00:42:43.283 Removing: /var/run/dpdk/spdk_pid231764 00:42:43.283 Removing: /var/run/dpdk/spdk_pid232423 00:42:43.283 Removing: /var/run/dpdk/spdk_pid232818 00:42:43.283 Removing: /var/run/dpdk/spdk_pid232826 00:42:43.283 Removing: /var/run/dpdk/spdk_pid233075 00:42:43.283 Removing: /var/run/dpdk/spdk_pid233978 00:42:43.283 Removing: /var/run/dpdk/spdk_pid234708 00:42:43.283 Removing: /var/run/dpdk/spdk_pid240532 00:42:43.283 Removing: /var/run/dpdk/spdk_pid268793 00:42:43.283 Removing: 
/var/run/dpdk/spdk_pid271734 00:42:43.283 Removing: /var/run/dpdk/spdk_pid272912 00:42:43.283 Removing: /var/run/dpdk/spdk_pid274226 00:42:43.283 Removing: /var/run/dpdk/spdk_pid274370 00:42:43.283 Removing: /var/run/dpdk/spdk_pid274511 00:42:43.283 Removing: /var/run/dpdk/spdk_pid274652 00:42:43.283 Removing: /var/run/dpdk/spdk_pid275088 00:42:43.542 Removing: /var/run/dpdk/spdk_pid276406 00:42:43.542 Removing: /var/run/dpdk/spdk_pid277137 00:42:43.542 Removing: /var/run/dpdk/spdk_pid277568 00:42:43.542 Removing: /var/run/dpdk/spdk_pid279062 00:42:43.542 Removing: /var/run/dpdk/spdk_pid279479 00:42:43.542 Removing: /var/run/dpdk/spdk_pid279926 00:42:43.542 Removing: /var/run/dpdk/spdk_pid282324 00:42:43.542 Removing: /var/run/dpdk/spdk_pid285723 00:42:43.542 Removing: /var/run/dpdk/spdk_pid285724 00:42:43.542 Removing: /var/run/dpdk/spdk_pid285725 00:42:43.542 Removing: /var/run/dpdk/spdk_pid287827 00:42:43.542 Removing: /var/run/dpdk/spdk_pid290032 00:42:43.542 Removing: /var/run/dpdk/spdk_pid294172 00:42:43.542 Removing: /var/run/dpdk/spdk_pid316871 00:42:43.542 Removing: /var/run/dpdk/spdk_pid319773 00:42:43.542 Removing: /var/run/dpdk/spdk_pid324189 00:42:43.542 Removing: /var/run/dpdk/spdk_pid325122 00:42:43.542 Removing: /var/run/dpdk/spdk_pid326091 00:42:43.542 Removing: /var/run/dpdk/spdk_pid327042 00:42:43.542 Removing: /var/run/dpdk/spdk_pid329882 00:42:43.542 Removing: /var/run/dpdk/spdk_pid332224 00:42:43.542 Removing: /var/run/dpdk/spdk_pid336462 00:42:43.542 Removing: /var/run/dpdk/spdk_pid336465 00:42:43.542 Removing: /var/run/dpdk/spdk_pid339360 00:42:43.542 Removing: /var/run/dpdk/spdk_pid339498 00:42:43.542 Removing: /var/run/dpdk/spdk_pid339632 00:42:43.542 Removing: /var/run/dpdk/spdk_pid339894 00:42:43.542 Removing: /var/run/dpdk/spdk_pid339975 00:42:43.542 Removing: /var/run/dpdk/spdk_pid341098 00:42:43.542 Removing: /var/run/dpdk/spdk_pid342281 00:42:43.542 Removing: /var/run/dpdk/spdk_pid343456 00:42:43.542 Removing: 
/var/run/dpdk/spdk_pid344632 00:42:43.542 Removing: /var/run/dpdk/spdk_pid345806 00:42:43.542 Removing: /var/run/dpdk/spdk_pid346984 00:42:43.542 Removing: /var/run/dpdk/spdk_pid350800 00:42:43.542 Removing: /var/run/dpdk/spdk_pid351128 00:42:43.542 Removing: /var/run/dpdk/spdk_pid353146 00:42:43.542 Removing: /var/run/dpdk/spdk_pid353900 00:42:43.542 Removing: /var/run/dpdk/spdk_pid357620 00:42:43.542 Removing: /var/run/dpdk/spdk_pid359482 00:42:43.542 Removing: /var/run/dpdk/spdk_pid362896 00:42:43.542 Removing: /var/run/dpdk/spdk_pid366472 00:42:43.542 Removing: /var/run/dpdk/spdk_pid372845 00:42:43.542 Removing: /var/run/dpdk/spdk_pid377328 00:42:43.542 Removing: /var/run/dpdk/spdk_pid377335 00:42:43.542 Removing: /var/run/dpdk/spdk_pid390582 00:42:43.542 Removing: /var/run/dpdk/spdk_pid390999 00:42:43.542 Removing: /var/run/dpdk/spdk_pid391512 00:42:43.542 Removing: /var/run/dpdk/spdk_pid391915 00:42:43.542 Removing: /var/run/dpdk/spdk_pid392497 00:42:43.542 Removing: /var/run/dpdk/spdk_pid392902 00:42:43.542 Removing: /var/run/dpdk/spdk_pid393308 00:42:43.542 Removing: /var/run/dpdk/spdk_pid393753 00:42:43.542 Removing: /var/run/dpdk/spdk_pid396220 00:42:43.542 Removing: /var/run/dpdk/spdk_pid396486 00:42:43.542 Removing: /var/run/dpdk/spdk_pid400275 00:42:43.542 Removing: /var/run/dpdk/spdk_pid400337 00:42:43.542 Removing: /var/run/dpdk/spdk_pid403693 00:42:43.542 Removing: /var/run/dpdk/spdk_pid406303 00:42:43.542 Removing: /var/run/dpdk/spdk_pid413088 00:42:43.542 Removing: /var/run/dpdk/spdk_pid413493 00:42:43.542 Removing: /var/run/dpdk/spdk_pid415996 00:42:43.542 Removing: /var/run/dpdk/spdk_pid416249 00:42:43.542 Removing: /var/run/dpdk/spdk_pid418760 00:42:43.542 Removing: /var/run/dpdk/spdk_pid423062 00:42:43.542 Removing: /var/run/dpdk/spdk_pid425106 00:42:43.542 Removing: /var/run/dpdk/spdk_pid431485 00:42:43.542 Removing: /var/run/dpdk/spdk_pid436687 00:42:43.542 Removing: /var/run/dpdk/spdk_pid437868 00:42:43.542 Removing: 
/var/run/dpdk/spdk_pid438521 00:42:43.542 Removing: /var/run/dpdk/spdk_pid448659 00:42:43.542 Removing: /var/run/dpdk/spdk_pid450819 00:42:43.542 Removing: /var/run/dpdk/spdk_pid452827 00:42:43.542 Removing: /var/run/dpdk/spdk_pid458482 00:42:43.542 Removing: /var/run/dpdk/spdk_pid458488 00:42:43.542 Removing: /var/run/dpdk/spdk_pid461389 00:42:43.542 Removing: /var/run/dpdk/spdk_pid462786 00:42:43.542 Removing: /var/run/dpdk/spdk_pid464189 00:42:43.542 Removing: /var/run/dpdk/spdk_pid465048 00:42:43.542 Removing: /var/run/dpdk/spdk_pid466448 00:42:43.542 Removing: /var/run/dpdk/spdk_pid467213 00:42:43.542 Removing: /var/run/dpdk/spdk_pid472601 00:42:43.542 Removing: /var/run/dpdk/spdk_pid472990 00:42:43.542 Removing: /var/run/dpdk/spdk_pid473381 00:42:43.542 Removing: /var/run/dpdk/spdk_pid474881 00:42:43.542 Removing: /var/run/dpdk/spdk_pid475211 00:42:43.542 Removing: /var/run/dpdk/spdk_pid475611 00:42:43.542 Removing: /var/run/dpdk/spdk_pid478058 00:42:43.542 Removing: /var/run/dpdk/spdk_pid478063 00:42:43.542 Removing: /var/run/dpdk/spdk_pid479529 00:42:43.542 Removing: /var/run/dpdk/spdk_pid479896 00:42:43.542 Removing: /var/run/dpdk/spdk_pid480024 00:42:43.542 Removing: /var/run/dpdk/spdk_pid98114 00:42:43.542 Removing: /var/run/dpdk/spdk_pid98857 00:42:43.542 Removing: /var/run/dpdk/spdk_pid99798 00:42:43.542 Clean 00:42:43.542 13:53:35 -- common/autotest_common.sh@1451 -- # return 0 00:42:43.542 13:53:35 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:42:43.542 13:53:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:43.542 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:42:43.542 13:53:35 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:42:43.542 13:53:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:43.542 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:42:43.800 13:53:35 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:43.800 13:53:35 
-- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:43.800 13:53:35 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:43.800 13:53:35 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:42:43.800 13:53:35 -- spdk/autotest.sh@394 -- # hostname 00:42:43.800 13:53:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:43.800 geninfo: WARNING: invalid characters removed from testname! 00:43:15.860 13:54:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:19.138 13:54:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:21.662 13:54:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:24.939 13:54:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:28.214 13:54:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:30.743 13:54:22 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:34.024 13:54:25 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:34.024 13:54:25 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:43:34.024 13:54:25 -- common/autotest_common.sh@1691 -- $ lcov --version 00:43:34.024 13:54:25 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:43:34.024 13:54:25 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:43:34.024 13:54:25 -- 
scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:43:34.024 13:54:25 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:43:34.024 13:54:25 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:43:34.025 13:54:25 -- scripts/common.sh@336 -- $ IFS=.-: 00:43:34.025 13:54:25 -- scripts/common.sh@336 -- $ read -ra ver1 00:43:34.025 13:54:25 -- scripts/common.sh@337 -- $ IFS=.-: 00:43:34.025 13:54:25 -- scripts/common.sh@337 -- $ read -ra ver2 00:43:34.025 13:54:25 -- scripts/common.sh@338 -- $ local 'op=<' 00:43:34.025 13:54:25 -- scripts/common.sh@340 -- $ ver1_l=2 00:43:34.025 13:54:25 -- scripts/common.sh@341 -- $ ver2_l=1 00:43:34.025 13:54:25 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:43:34.025 13:54:25 -- scripts/common.sh@344 -- $ case "$op" in 00:43:34.025 13:54:25 -- scripts/common.sh@345 -- $ : 1 00:43:34.025 13:54:25 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:43:34.025 13:54:25 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:34.025 13:54:25 -- scripts/common.sh@365 -- $ decimal 1 00:43:34.025 13:54:25 -- scripts/common.sh@353 -- $ local d=1 00:43:34.025 13:54:25 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:43:34.025 13:54:25 -- scripts/common.sh@355 -- $ echo 1 00:43:34.025 13:54:25 -- scripts/common.sh@365 -- $ ver1[v]=1 00:43:34.025 13:54:25 -- scripts/common.sh@366 -- $ decimal 2 00:43:34.025 13:54:25 -- scripts/common.sh@353 -- $ local d=2 00:43:34.025 13:54:25 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:43:34.025 13:54:25 -- scripts/common.sh@355 -- $ echo 2 00:43:34.025 13:54:25 -- scripts/common.sh@366 -- $ ver2[v]=2 00:43:34.025 13:54:25 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:43:34.025 13:54:25 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:43:34.025 13:54:25 -- scripts/common.sh@368 -- $ return 0 00:43:34.025 13:54:25 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:34.025 13:54:25 -- 
common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:43:34.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.025 --rc genhtml_branch_coverage=1 00:43:34.025 --rc genhtml_function_coverage=1 00:43:34.025 --rc genhtml_legend=1 00:43:34.025 --rc geninfo_all_blocks=1 00:43:34.025 --rc geninfo_unexecuted_blocks=1 00:43:34.025 00:43:34.025 ' 00:43:34.025 13:54:25 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:43:34.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.025 --rc genhtml_branch_coverage=1 00:43:34.025 --rc genhtml_function_coverage=1 00:43:34.025 --rc genhtml_legend=1 00:43:34.025 --rc geninfo_all_blocks=1 00:43:34.025 --rc geninfo_unexecuted_blocks=1 00:43:34.025 00:43:34.025 ' 00:43:34.025 13:54:25 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:43:34.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.025 --rc genhtml_branch_coverage=1 00:43:34.025 --rc genhtml_function_coverage=1 00:43:34.025 --rc genhtml_legend=1 00:43:34.025 --rc geninfo_all_blocks=1 00:43:34.025 --rc geninfo_unexecuted_blocks=1 00:43:34.025 00:43:34.025 ' 00:43:34.025 13:54:25 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:43:34.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.025 --rc genhtml_branch_coverage=1 00:43:34.025 --rc genhtml_function_coverage=1 00:43:34.025 --rc genhtml_legend=1 00:43:34.025 --rc geninfo_all_blocks=1 00:43:34.025 --rc geninfo_unexecuted_blocks=1 00:43:34.025 00:43:34.025 ' 00:43:34.025 13:54:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:34.025 13:54:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:43:34.025 13:54:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:34.025 13:54:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:34.025 13:54:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 
00:43:34.025 13:54:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.025 13:54:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.025 13:54:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.025 13:54:25 -- paths/export.sh@5 -- $ export PATH 00:43:34.025 13:54:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.025 13:54:25 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:34.025 13:54:25 -- common/autobuild_common.sh@486 -- $ date +%s 00:43:34.025 13:54:25 -- common/autobuild_common.sh@486 
-- $ mktemp -dt spdk_1728906865.XXXXXX 00:43:34.025 13:54:25 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728906865.n2PqVi 00:43:34.025 13:54:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:43:34.025 13:54:25 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:43:34.025 13:54:25 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:43:34.025 13:54:25 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:43:34.025 13:54:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:34.025 13:54:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:34.025 13:54:25 -- common/autobuild_common.sh@502 -- $ get_config_params 00:43:34.025 13:54:25 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:43:34.025 13:54:25 -- common/autotest_common.sh@10 -- $ set +x 00:43:34.025 13:54:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:43:34.025 13:54:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:43:34.025 13:54:25 -- pm/common@17 -- $ local monitor 00:43:34.025 13:54:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.025 13:54:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.025 13:54:25 -- pm/common@21 -- $ date +%s 00:43:34.025 13:54:25 -- pm/common@19 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.025 13:54:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.025 13:54:25 -- pm/common@21 -- $ date +%s 00:43:34.025 13:54:25 -- pm/common@25 -- $ sleep 1 00:43:34.025 13:54:25 -- pm/common@21 -- $ date +%s 00:43:34.025 13:54:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728906865 00:43:34.025 13:54:25 -- pm/common@21 -- $ date +%s 00:43:34.025 13:54:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728906865 00:43:34.025 13:54:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728906865 00:43:34.025 13:54:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728906865 00:43:34.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728906865_collect-cpu-load.pm.log 00:43:34.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728906865_collect-vmstat.pm.log 00:43:34.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728906865_collect-cpu-temp.pm.log 00:43:34.025 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728906865_collect-bmc-pm.bmc.pm.log 00:43:34.965 13:54:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:43:34.965 13:54:26 
-- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:43:34.965 13:54:26 -- spdk/autopackage.sh@14 -- $ timing_finish 00:43:34.965 13:54:26 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:34.965 13:54:26 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:34.965 13:54:26 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:34.965 13:54:26 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:34.965 13:54:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:34.965 13:54:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:34.965 13:54:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.965 13:54:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:34.965 13:54:26 -- pm/common@44 -- $ pid=492843 00:43:34.965 13:54:26 -- pm/common@50 -- $ kill -TERM 492843 00:43:34.965 13:54:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.965 13:54:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:34.965 13:54:26 -- pm/common@44 -- $ pid=492845 00:43:34.965 13:54:26 -- pm/common@50 -- $ kill -TERM 492845 00:43:34.965 13:54:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.965 13:54:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:34.965 13:54:26 -- pm/common@44 -- $ pid=492848 00:43:34.965 13:54:26 -- pm/common@50 -- $ kill -TERM 492848 00:43:34.965 13:54:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:34.965 13:54:26 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:34.965 13:54:26 -- pm/common@44 -- $ pid=492880 00:43:34.965 13:54:26 -- pm/common@50 -- $ sudo -E kill -TERM 492880 00:43:34.965 + [[ -n 5947 ]] 00:43:34.965 + sudo kill 5947 00:43:34.976 [Pipeline] } 00:43:34.993 [Pipeline] // stage 00:43:34.999 [Pipeline] } 00:43:35.014 [Pipeline] // timeout 00:43:35.019 [Pipeline] } 00:43:35.034 [Pipeline] // catchError 00:43:35.039 [Pipeline] } 00:43:35.054 [Pipeline] // wrap 00:43:35.060 [Pipeline] } 00:43:35.073 [Pipeline] // catchError 00:43:35.082 [Pipeline] stage 00:43:35.085 [Pipeline] { (Epilogue) 00:43:35.099 [Pipeline] catchError 00:43:35.101 [Pipeline] { 00:43:35.115 [Pipeline] echo 00:43:35.116 Cleanup processes 00:43:35.122 [Pipeline] sh 00:43:35.411 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:35.411 493041 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:35.411 493154 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:35.426 [Pipeline] sh 00:43:35.715 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:35.715 ++ grep -v 'sudo pgrep' 00:43:35.715 ++ awk '{print $1}' 00:43:35.715 + sudo kill -9 493041 00:43:35.729 [Pipeline] sh 00:43:36.016 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:48.225 [Pipeline] sh 00:43:48.511 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:48.511 Artifacts sizes are good 00:43:48.528 [Pipeline] archiveArtifacts 00:43:48.536 Archiving artifacts 00:43:49.069 [Pipeline] sh 00:43:49.353 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:49.367 [Pipeline] cleanWs 00:43:49.377 [WS-CLEANUP] Deleting project workspace... 00:43:49.377 [WS-CLEANUP] Deferred wipeout is used... 
00:43:49.384 [WS-CLEANUP] done 00:43:49.386 [Pipeline] } 00:43:49.403 [Pipeline] // catchError 00:43:49.414 [Pipeline] sh 00:43:49.698 + logger -p user.info -t JENKINS-CI 00:43:49.706 [Pipeline] } 00:43:49.720 [Pipeline] // stage 00:43:49.726 [Pipeline] } 00:43:49.741 [Pipeline] // node 00:43:49.746 [Pipeline] End of Pipeline 00:43:49.791 Finished: SUCCESS